Jul 2 00:21:52.092687 kernel: Linux version 6.6.36-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Jul 1 22:47:51 -00 2024 Jul 2 00:21:52.092732 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b Jul 2 00:21:52.092748 kernel: BIOS-provided physical RAM map: Jul 2 00:21:52.092761 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jul 2 00:21:52.092772 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jul 2 00:21:52.092783 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jul 2 00:21:52.092801 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable Jul 2 00:21:52.092814 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved Jul 2 00:21:52.092826 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved Jul 2 00:21:52.092838 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jul 2 00:21:52.092850 kernel: NX (Execute Disable) protection: active Jul 2 00:21:52.092862 kernel: APIC: Static calls initialized Jul 2 00:21:52.092874 kernel: SMBIOS 2.7 present. Jul 2 00:21:52.092887 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017 Jul 2 00:21:52.100337 kernel: Hypervisor detected: KVM Jul 2 00:21:52.100357 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 2 00:21:52.100372 kernel: kvm-clock: using sched offset of 6820182006 cycles Jul 2 00:21:52.100388 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 2 00:21:52.100402 kernel: tsc: Detected 2499.996 MHz processor Jul 2 00:21:52.100417 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 2 00:21:52.100433 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 2 00:21:52.100450 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000 Jul 2 00:21:52.100465 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jul 2 00:21:52.100479 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 2 00:21:52.100494 kernel: Using GB pages for direct mapping Jul 2 00:21:52.100508 kernel: ACPI: Early table checksum verification disabled Jul 2 00:21:52.100522 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON) Jul 2 00:21:52.100536 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001) Jul 2 00:21:52.100551 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001) Jul 2 00:21:52.100565 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jul 2 00:21:52.100583 kernel: ACPI: FACS 0x000000007D9EFF40 000040 Jul 2 00:21:52.100597 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jul 2 00:21:52.100611 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jul 2 00:21:52.100625 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001) Jul 2 00:21:52.100639 kernel: ACPI: SLIT 
0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jul 2 00:21:52.100654 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001) Jul 2 00:21:52.100668 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001) Jul 2 00:21:52.100682 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001) Jul 2 00:21:52.100699 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3] Jul 2 00:21:52.100713 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488] Jul 2 00:21:52.100734 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f] Jul 2 00:21:52.100749 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39] Jul 2 00:21:52.100763 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645] Jul 2 00:21:52.100779 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf] Jul 2 00:21:52.100797 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b] Jul 2 00:21:52.100812 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7] Jul 2 00:21:52.100827 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037] Jul 2 00:21:52.100842 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba] Jul 2 00:21:52.100857 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jul 2 00:21:52.100872 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jul 2 00:21:52.100887 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff] Jul 2 00:21:52.102976 kernel: NUMA: Initialized distance table, cnt=1 Jul 2 00:21:52.103004 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff] Jul 2 00:21:52.103028 kernel: Zone ranges: Jul 2 00:21:52.103042 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 2 00:21:52.103057 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff] Jul 2 00:21:52.103072 kernel: Normal empty Jul 2 00:21:52.103087 kernel: Movable zone start for each node Jul 2 00:21:52.103102 kernel: Early memory node ranges Jul 2 00:21:52.103116 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jul 2 00:21:52.103131 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff] Jul 2 00:21:52.103146 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff] Jul 2 00:21:52.103164 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 2 00:21:52.103179 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jul 2 00:21:52.103194 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges Jul 2 00:21:52.103208 kernel: ACPI: PM-Timer IO Port: 0xb008 Jul 2 00:21:52.103223 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 2 00:21:52.103237 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23 Jul 2 00:21:52.103252 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 2 00:21:52.103266 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 2 00:21:52.103281 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 2 00:21:52.103298 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 2 00:21:52.103313 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 2 00:21:52.103327 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 2 00:21:52.103342 kernel: TSC deadline timer available Jul 2 00:21:52.103357 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jul 2 00:21:52.103371 kernel: kvm-guest: APIC: eoi() replaced with 
kvm_guest_apic_eoi_write() Jul 2 00:21:52.103386 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices Jul 2 00:21:52.103400 kernel: Booting paravirtualized kernel on KVM Jul 2 00:21:52.103415 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 2 00:21:52.103430 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jul 2 00:21:52.103448 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Jul 2 00:21:52.103463 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Jul 2 00:21:52.103477 kernel: pcpu-alloc: [0] 0 1 Jul 2 00:21:52.103491 kernel: kvm-guest: PV spinlocks enabled Jul 2 00:21:52.103506 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 2 00:21:52.103522 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b Jul 2 00:21:52.103537 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 2 00:21:52.103555 kernel: random: crng init done Jul 2 00:21:52.103569 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 2 00:21:52.103583 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 2 00:21:52.103598 kernel: Fallback order for Node 0: 0 Jul 2 00:21:52.103612 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242 Jul 2 00:21:52.103627 kernel: Policy zone: DMA32 Jul 2 00:21:52.103641 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 2 00:21:52.103729 kernel: Memory: 1926204K/2057760K available (12288K kernel code, 2303K rwdata, 22640K rodata, 49328K init, 2016K bss, 131296K reserved, 0K cma-reserved) Jul 2 00:21:52.103747 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 2 00:21:52.103766 kernel: Kernel/User page tables isolation: enabled Jul 2 00:21:52.103781 kernel: ftrace: allocating 37658 entries in 148 pages Jul 2 00:21:52.103796 kernel: ftrace: allocated 148 pages with 3 groups Jul 2 00:21:52.103811 kernel: Dynamic Preempt: voluntary Jul 2 00:21:52.103826 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 2 00:21:52.103842 kernel: rcu: RCU event tracing is enabled. Jul 2 00:21:52.103857 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 2 00:21:52.109984 kernel: Trampoline variant of Tasks RCU enabled. Jul 2 00:21:52.110002 kernel: Rude variant of Tasks RCU enabled. Jul 2 00:21:52.110017 kernel: Tracing variant of Tasks RCU enabled. Jul 2 00:21:52.110044 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 2 00:21:52.110058 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 2 00:21:52.110072 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jul 2 00:21:52.110086 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
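
The "Kernel command line:" entry above carries the switches that drive the rest of this boot: root=LABEL=ROOT and rootflags for the writable root, verity.usr/verity.usrhash for the read-only /usr image, and the flatcar.first_boot / flatcar.oem.id=ec2 flags that Ignition reacts to later in the log. As an illustrative aside, here is a minimal sketch (Python, standard library only) of reading those key=value arguments back from /proc/cmdline on a running system; real parsers handle quoting and repeated keys (rootflags appears twice in this command line) more carefully:

    # Sketch: split /proc/cmdline into bare flags and key=value parameters.
    # Last occurrence wins for duplicate keys, which is good enough for a
    # quick look but not a faithful reimplementation of the kernel's parsing.
    def parse_cmdline(text: str):
        flags, params = [], {}
        for token in text.split():
            if "=" in token:
                key, value = token.split("=", 1)
                params[key] = value
            else:
                flags.append(token)
        return flags, params

    if __name__ == "__main__":
        with open("/proc/cmdline") as f:
            flags, params = parse_cmdline(f.read())
        print(params.get("root"), params.get("flatcar.oem.id"))
        print(params.get("verity.usrhash"))
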
Jul 2 00:21:52.110101 kernel: Console: colour VGA+ 80x25 Jul 2 00:21:52.110115 kernel: printk: console [ttyS0] enabled Jul 2 00:21:52.110129 kernel: ACPI: Core revision 20230628 Jul 2 00:21:52.110144 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns Jul 2 00:21:52.110158 kernel: APIC: Switch to symmetric I/O mode setup Jul 2 00:21:52.110175 kernel: x2apic enabled Jul 2 00:21:52.110190 kernel: APIC: Switched APIC routing to: physical x2apic Jul 2 00:21:52.110215 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Jul 2 00:21:52.110233 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996) Jul 2 00:21:52.110248 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jul 2 00:21:52.110263 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jul 2 00:21:52.110278 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 2 00:21:52.110292 kernel: Spectre V2 : Mitigation: Retpolines Jul 2 00:21:52.110307 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jul 2 00:21:52.110321 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jul 2 00:21:52.110336 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Jul 2 00:21:52.110351 kernel: RETBleed: Vulnerable Jul 2 00:21:52.110369 kernel: Speculative Store Bypass: Vulnerable Jul 2 00:21:52.110384 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode Jul 2 00:21:52.110398 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jul 2 00:21:52.110413 kernel: GDS: Unknown: Dependent on hypervisor status Jul 2 00:21:52.110427 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 2 00:21:52.110442 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 2 00:21:52.110460 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 2 00:21:52.110475 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers' Jul 2 00:21:52.110489 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR' Jul 2 00:21:52.110504 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jul 2 00:21:52.110519 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jul 2 00:21:52.110534 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jul 2 00:21:52.110548 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Jul 2 00:21:52.110563 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 2 00:21:52.110578 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64 Jul 2 00:21:52.110592 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64 Jul 2 00:21:52.110607 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64 Jul 2 00:21:52.110625 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512 Jul 2 00:21:52.110639 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024 Jul 2 00:21:52.110654 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8 Jul 2 00:21:52.110669 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format. 
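
The mitigation lines above (Spectre V1/V2 mitigated with retpolines, RETBleed and Speculative Store Bypass reported as Vulnerable, MDS and MMIO Stale Data attempted without microcode) are also exposed at runtime through sysfs, so the same status can be read after boot without digging through dmesg. A small sketch, assuming only the standard /sys/devices/system/cpu/vulnerabilities interface:

    # Sketch: print the kernel's per-vulnerability mitigation status, the
    # same information summarized in the boot messages above.
    import os

    VULN_DIR = "/sys/devices/system/cpu/vulnerabilities"

    def mitigation_report() -> dict:
        report = {}
        for name in sorted(os.listdir(VULN_DIR)):
            with open(os.path.join(VULN_DIR, name)) as f:
                report[name] = f.read().strip()
        return report

    if __name__ == "__main__":
        for name, status in mitigation_report().items():
            print(f"{name:25s} {status}")
        # On this instance one would expect entries such as "retbleed" and
        # "mds" reporting Vulnerable, matching the log above.
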
Jul 2 00:21:52.110683 kernel: Freeing SMP alternatives memory: 32K Jul 2 00:21:52.110698 kernel: pid_max: default: 32768 minimum: 301 Jul 2 00:21:52.110713 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Jul 2 00:21:52.110726 kernel: SELinux: Initializing. Jul 2 00:21:52.110741 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 2 00:21:52.110756 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 2 00:21:52.110772 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7) Jul 2 00:21:52.110784 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jul 2 00:21:52.110803 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jul 2 00:21:52.110819 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Jul 2 00:21:52.110833 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jul 2 00:21:52.110846 kernel: signal: max sigframe size: 3632 Jul 2 00:21:52.110861 kernel: rcu: Hierarchical SRCU implementation. Jul 2 00:21:52.110877 kernel: rcu: Max phase no-delay instances is 400. Jul 2 00:21:52.110903 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 2 00:21:52.110917 kernel: smp: Bringing up secondary CPUs ... Jul 2 00:21:52.110930 kernel: smpboot: x86: Booting SMP configuration: Jul 2 00:21:52.110947 kernel: .... node #0, CPUs: #1 Jul 2 00:21:52.110963 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Jul 2 00:21:52.110978 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jul 2 00:21:52.110992 kernel: smp: Brought up 1 node, 2 CPUs Jul 2 00:21:52.111005 kernel: smpboot: Max logical packages: 1 Jul 2 00:21:52.111019 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS) Jul 2 00:21:52.111033 kernel: devtmpfs: initialized Jul 2 00:21:52.111047 kernel: x86/mm: Memory block size: 128MB Jul 2 00:21:52.111064 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 2 00:21:52.111080 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 2 00:21:52.111095 kernel: pinctrl core: initialized pinctrl subsystem Jul 2 00:21:52.111109 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 2 00:21:52.111124 kernel: audit: initializing netlink subsys (disabled) Jul 2 00:21:52.111139 kernel: audit: type=2000 audit(1719879711.111:1): state=initialized audit_enabled=0 res=1 Jul 2 00:21:52.111154 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 2 00:21:52.111169 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 2 00:21:52.111184 kernel: cpuidle: using governor menu Jul 2 00:21:52.111202 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 2 00:21:52.111217 kernel: dca service started, version 1.12.1 Jul 2 00:21:52.111290 kernel: PCI: Using configuration type 1 for base access Jul 2 00:21:52.111306 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
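
The audit record above is stamped audit(1719879711.111:1); that epoch value is the same instant the RTC reports further down in this log ("setting system clock to 2024-07-02T00:21:51 UTC (1719879711)"). A quick sanity check of the two timestamps, standard library only:

    # Sketch: confirm the audit timestamp and the rtc_cmos epoch refer to
    # the same wall-clock instant, 2024-07-02 00:21:51 UTC.
    from datetime import datetime, timezone

    audit_epoch = 1719879711.111   # from: audit(1719879711.111:1)
    rtc_epoch = 1719879711         # from: rtc_cmos ... (1719879711)

    print(datetime.fromtimestamp(audit_epoch, tz=timezone.utc).isoformat())
    # -> 2024-07-02T00:21:51.111000+00:00
    print(int(audit_epoch) == rtc_epoch)   # -> True
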
Jul 2 00:21:52.111322 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 2 00:21:52.111337 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 2 00:21:52.111352 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 2 00:21:52.111367 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 2 00:21:52.111383 kernel: ACPI: Added _OSI(Module Device) Jul 2 00:21:52.111401 kernel: ACPI: Added _OSI(Processor Device) Jul 2 00:21:52.111417 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jul 2 00:21:52.111432 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 2 00:21:52.111447 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Jul 2 00:21:52.111463 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jul 2 00:21:52.111479 kernel: ACPI: Interpreter enabled Jul 2 00:21:52.111494 kernel: ACPI: PM: (supports S0 S5) Jul 2 00:21:52.111510 kernel: ACPI: Using IOAPIC for interrupt routing Jul 2 00:21:52.111526 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 2 00:21:52.111545 kernel: PCI: Using E820 reservations for host bridge windows Jul 2 00:21:52.111561 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Jul 2 00:21:52.111577 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 2 00:21:52.115060 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jul 2 00:21:52.115239 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jul 2 00:21:52.115368 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jul 2 00:21:52.115388 kernel: acpiphp: Slot [3] registered Jul 2 00:21:52.115411 kernel: acpiphp: Slot [4] registered Jul 2 00:21:52.115427 kernel: acpiphp: Slot [5] registered Jul 2 00:21:52.115530 kernel: acpiphp: Slot [6] registered Jul 2 00:21:52.115550 kernel: acpiphp: Slot [7] registered Jul 2 00:21:52.115566 kernel: acpiphp: Slot [8] registered Jul 2 00:21:52.115582 kernel: acpiphp: Slot [9] registered Jul 2 00:21:52.115598 kernel: acpiphp: Slot [10] registered Jul 2 00:21:52.115614 kernel: acpiphp: Slot [11] registered Jul 2 00:21:52.115630 kernel: acpiphp: Slot [12] registered Jul 2 00:21:52.115645 kernel: acpiphp: Slot [13] registered Jul 2 00:21:52.115665 kernel: acpiphp: Slot [14] registered Jul 2 00:21:52.115680 kernel: acpiphp: Slot [15] registered Jul 2 00:21:52.115696 kernel: acpiphp: Slot [16] registered Jul 2 00:21:52.115712 kernel: acpiphp: Slot [17] registered Jul 2 00:21:52.115728 kernel: acpiphp: Slot [18] registered Jul 2 00:21:52.115743 kernel: acpiphp: Slot [19] registered Jul 2 00:21:52.115759 kernel: acpiphp: Slot [20] registered Jul 2 00:21:52.115774 kernel: acpiphp: Slot [21] registered Jul 2 00:21:52.115790 kernel: acpiphp: Slot [22] registered Jul 2 00:21:52.115808 kernel: acpiphp: Slot [23] registered Jul 2 00:21:52.115823 kernel: acpiphp: Slot [24] registered Jul 2 00:21:52.115839 kernel: acpiphp: Slot [25] registered Jul 2 00:21:52.115854 kernel: acpiphp: Slot [26] registered Jul 2 00:21:52.118967 kernel: acpiphp: Slot [27] registered Jul 2 00:21:52.118987 kernel: acpiphp: Slot [28] registered Jul 2 00:21:52.119003 kernel: acpiphp: Slot [29] registered Jul 2 00:21:52.119018 kernel: acpiphp: Slot [30] registered Jul 2 00:21:52.119034 kernel: acpiphp: Slot [31] registered Jul 2 00:21:52.119050 kernel: PCI host bridge to bus 0000:00 Jul 2 00:21:52.119294 kernel: pci_bus 0000:00: 
root bus resource [io 0x0000-0x0cf7 window] Jul 2 00:21:52.119507 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 2 00:21:52.125972 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 2 00:21:52.126200 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jul 2 00:21:52.129046 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 2 00:21:52.129280 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jul 2 00:21:52.129444 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jul 2 00:21:52.129602 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000 Jul 2 00:21:52.129740 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Jul 2 00:21:52.132179 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB Jul 2 00:21:52.134276 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff Jul 2 00:21:52.134562 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff Jul 2 00:21:52.134710 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff Jul 2 00:21:52.134859 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff Jul 2 00:21:52.138115 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff Jul 2 00:21:52.138649 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff Jul 2 00:21:52.141011 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000 Jul 2 00:21:52.141184 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref] Jul 2 00:21:52.141327 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jul 2 00:21:52.141505 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 2 00:21:52.141669 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jul 2 00:21:52.141812 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff] Jul 2 00:21:52.149235 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jul 2 00:21:52.149869 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff] Jul 2 00:21:52.153969 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 2 00:21:52.153997 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 2 00:21:52.154017 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 2 00:21:52.154047 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 2 00:21:52.154066 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jul 2 00:21:52.154084 kernel: iommu: Default domain type: Translated Jul 2 00:21:52.154102 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 2 00:21:52.154121 kernel: PCI: Using ACPI for IRQ routing Jul 2 00:21:52.154140 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 2 00:21:52.154159 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jul 2 00:21:52.154177 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff] Jul 2 00:21:52.154471 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device Jul 2 00:21:52.154627 kernel: pci 0000:00:03.0: vgaarb: bridge control possible Jul 2 00:21:52.154768 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 2 00:21:52.154789 kernel: vgaarb: loaded Jul 2 00:21:52.154806 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0 Jul 2 00:21:52.154823 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter Jul 2 00:21:52.154839 kernel: clocksource: Switched to clocksource kvm-clock Jul 2 00:21:52.155933 kernel: VFS: Disk quotas dquot_6.6.0 Jul 
2 00:21:52.155971 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 2 00:21:52.155992 kernel: pnp: PnP ACPI init Jul 2 00:21:52.156019 kernel: pnp: PnP ACPI: found 5 devices Jul 2 00:21:52.156038 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 2 00:21:52.156057 kernel: NET: Registered PF_INET protocol family Jul 2 00:21:52.156076 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 2 00:21:52.156094 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jul 2 00:21:52.156113 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 2 00:21:52.156132 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 2 00:21:52.156151 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 2 00:21:52.156273 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jul 2 00:21:52.156293 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 2 00:21:52.156311 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 2 00:21:52.156330 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 2 00:21:52.156348 kernel: NET: Registered PF_XDP protocol family Jul 2 00:21:52.156547 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 2 00:21:52.156674 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 2 00:21:52.156799 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 2 00:21:52.159628 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jul 2 00:21:52.159824 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jul 2 00:21:52.159848 kernel: PCI: CLS 0 bytes, default 64 Jul 2 00:21:52.159866 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 2 00:21:52.159884 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns Jul 2 00:21:52.159948 kernel: clocksource: Switched to clocksource tsc Jul 2 00:21:52.159964 kernel: Initialise system trusted keyrings Jul 2 00:21:52.159981 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 2 00:21:52.159997 kernel: Key type asymmetric registered Jul 2 00:21:52.160021 kernel: Asymmetric key parser 'x509' registered Jul 2 00:21:52.160037 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jul 2 00:21:52.160053 kernel: io scheduler mq-deadline registered Jul 2 00:21:52.160069 kernel: io scheduler kyber registered Jul 2 00:21:52.160085 kernel: io scheduler bfq registered Jul 2 00:21:52.160101 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 2 00:21:52.160118 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 00:21:52.160135 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 2 00:21:52.160151 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 2 00:21:52.160171 kernel: i8042: Warning: Keylock active Jul 2 00:21:52.160230 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 2 00:21:52.160246 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 2 00:21:52.160405 kernel: rtc_cmos 00:00: RTC can wake from S4 Jul 2 00:21:52.160537 kernel: rtc_cmos 00:00: registered as rtc0 Jul 2 00:21:52.161079 kernel: rtc_cmos 00:00: setting system clock to 2024-07-02T00:21:51 UTC (1719879711) Jul 2 00:21:52.162990 kernel: rtc_cmos 
00:00: alarms up to one day, 114 bytes nvram Jul 2 00:21:52.163028 kernel: intel_pstate: CPU model not supported Jul 2 00:21:52.163053 kernel: NET: Registered PF_INET6 protocol family Jul 2 00:21:52.163070 kernel: Segment Routing with IPv6 Jul 2 00:21:52.163087 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 00:21:52.163103 kernel: NET: Registered PF_PACKET protocol family Jul 2 00:21:52.163119 kernel: Key type dns_resolver registered Jul 2 00:21:52.163136 kernel: IPI shorthand broadcast: enabled Jul 2 00:21:52.163153 kernel: sched_clock: Marking stable (685027172, 275215548)->(1034242019, -73999299) Jul 2 00:21:52.163169 kernel: registered taskstats version 1 Jul 2 00:21:52.163185 kernel: Loading compiled-in X.509 certificates Jul 2 00:21:52.163205 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.36-flatcar: be1ede902d88b56c26cc000ff22391c78349d771' Jul 2 00:21:52.163221 kernel: Key type .fscrypt registered Jul 2 00:21:52.163238 kernel: Key type fscrypt-provisioning registered Jul 2 00:21:52.163254 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 2 00:21:52.163270 kernel: ima: Allocated hash algorithm: sha1 Jul 2 00:21:52.163286 kernel: ima: No architecture policies found Jul 2 00:21:52.163303 kernel: clk: Disabling unused clocks Jul 2 00:21:52.163319 kernel: Freeing unused kernel image (initmem) memory: 49328K Jul 2 00:21:52.163336 kernel: Write protecting the kernel read-only data: 36864k Jul 2 00:21:52.163355 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K Jul 2 00:21:52.163371 kernel: Run /init as init process Jul 2 00:21:52.163387 kernel: with arguments: Jul 2 00:21:52.163403 kernel: /init Jul 2 00:21:52.163418 kernel: with environment: Jul 2 00:21:52.163434 kernel: HOME=/ Jul 2 00:21:52.163450 kernel: TERM=linux Jul 2 00:21:52.163466 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 00:21:52.163486 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 2 00:21:52.163511 systemd[1]: Detected virtualization amazon. Jul 2 00:21:52.163548 systemd[1]: Detected architecture x86-64. Jul 2 00:21:52.163753 systemd[1]: Running in initrd. Jul 2 00:21:52.163773 systemd[1]: No hostname configured, using default hostname. Jul 2 00:21:52.163795 systemd[1]: Hostname set to . Jul 2 00:21:52.163812 systemd[1]: Initializing machine ID from VM UUID. Jul 2 00:21:52.163830 systemd[1]: Queued start job for default target initrd.target. Jul 2 00:21:52.163847 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 00:21:52.163865 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 00:21:52.163884 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 2 00:21:52.167145 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 2 00:21:52.167342 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 2 00:21:52.167382 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... 
Jul 2 00:21:52.167403 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 2 00:21:52.167421 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 2 00:21:52.167439 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 00:21:52.167457 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 00:21:52.167474 systemd[1]: Reached target paths.target - Path Units. Jul 2 00:21:52.167492 systemd[1]: Reached target slices.target - Slice Units. Jul 2 00:21:52.167513 systemd[1]: Reached target swap.target - Swaps. Jul 2 00:21:52.167530 systemd[1]: Reached target timers.target - Timer Units. Jul 2 00:21:52.167552 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 00:21:52.167778 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 00:21:52.167923 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 2 00:21:52.167946 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 2 00:21:52.167964 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 00:21:52.168018 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 00:21:52.168038 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 00:21:52.168096 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 00:21:52.168116 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 2 00:21:52.168135 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 00:21:52.168187 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 2 00:21:52.168206 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 2 00:21:52.168225 systemd[1]: Starting systemd-fsck-usr.service... Jul 2 00:21:52.168355 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 00:21:52.168381 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 2 00:21:52.168399 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:21:52.168452 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 2 00:21:52.168472 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 00:21:52.168524 systemd[1]: Finished systemd-fsck-usr.service. Jul 2 00:21:52.172927 systemd-journald[178]: Collecting audit messages is disabled. Jul 2 00:21:52.173005 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 2 00:21:52.173038 systemd-journald[178]: Journal started Jul 2 00:21:52.173074 systemd-journald[178]: Runtime Journal (/run/log/journal/ec24bec6e2e86f0089bb71890fd37cd4) is 4.8M, max 38.6M, 33.7M free. Jul 2 00:21:52.117430 systemd-modules-load[179]: Inserted module 'overlay' Jul 2 00:21:52.274272 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 00:21:52.274315 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
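
The device units being waited on above (dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, dev-disk-by\x2dpartuuid-7130c94a\x2d....device, dev-mapper-usr.device) are the systemd-escaped forms of the corresponding /dev paths: the leading "/" is dropped, remaining "/" become "-", and bytes such as "-" are written as \xNN. systemd-escape --path is the authoritative tool; the sketch below is only a rough approximation of that encoding, assumed sufficient for ASCII device paths like the ones in this log:

    # Rough approximation of systemd's path escaping, which turns
    # "/dev/disk/by-label/EFI-SYSTEM" into the unit name
    # "dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device".
    def systemd_escape_path(path: str, suffix: str = "device") -> str:
        trimmed = path.strip("/")
        out = []
        for i, ch in enumerate(trimmed):
            if ch == "/":
                out.append("-")                   # path separators become dashes
            elif ch.isalnum() or ch == "_" or (ch == "." and i > 0):
                out.append(ch)                    # safe characters pass through
            else:
                out.append("".join(f"\\x{b:02x}" for b in ch.encode()))
        return "".join(out) + "." + suffix

    if __name__ == "__main__":
        print(systemd_escape_path("/dev/disk/by-label/EFI-SYSTEM"))
        # -> dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device
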
Jul 2 00:21:52.274349 kernel: Bridge firewalling registered Jul 2 00:21:52.200046 systemd-modules-load[179]: Inserted module 'br_netfilter' Jul 2 00:21:52.286426 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 00:21:52.289638 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:21:52.307185 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 00:21:52.311832 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 00:21:52.316221 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 00:21:52.318266 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 2 00:21:52.339675 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 00:21:52.359853 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 00:21:52.371006 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 00:21:52.385425 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 00:21:52.388783 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:21:52.394072 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 2 00:21:52.404081 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 00:21:52.441923 dracut-cmdline[211]: dracut-dracut-053 Jul 2 00:21:52.447700 dracut-cmdline[211]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=7cbbc16c4aaa626caa51ed60a6754ae638f7b2b87370c3f4fc6a9772b7874a8b Jul 2 00:21:52.485391 systemd-resolved[214]: Positive Trust Anchors: Jul 2 00:21:52.485415 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 00:21:52.485485 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jul 2 00:21:52.499170 systemd-resolved[214]: Defaulting to hostname 'linux'. Jul 2 00:21:52.502058 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 00:21:52.503300 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 00:21:52.572936 kernel: SCSI subsystem initialized Jul 2 00:21:52.590938 kernel: Loading iSCSI transport class v2.0-870. 
Jul 2 00:21:52.614086 kernel: iscsi: registered transport (tcp) Jul 2 00:21:52.657929 kernel: iscsi: registered transport (qla4xxx) Jul 2 00:21:52.658008 kernel: QLogic iSCSI HBA Driver Jul 2 00:21:52.710035 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 2 00:21:52.716098 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 2 00:21:52.791933 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 2 00:21:52.792015 kernel: device-mapper: uevent: version 1.0.3 Jul 2 00:21:52.792035 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 2 00:21:52.848939 kernel: raid6: avx512x4 gen() 11979 MB/s Jul 2 00:21:52.865948 kernel: raid6: avx512x2 gen() 5242 MB/s Jul 2 00:21:52.884262 kernel: raid6: avx512x1 gen() 8359 MB/s Jul 2 00:21:52.901989 kernel: raid6: avx2x4 gen() 10527 MB/s Jul 2 00:21:52.918985 kernel: raid6: avx2x2 gen() 14157 MB/s Jul 2 00:21:52.936077 kernel: raid6: avx2x1 gen() 7882 MB/s Jul 2 00:21:52.936159 kernel: raid6: using algorithm avx2x2 gen() 14157 MB/s Jul 2 00:21:52.953925 kernel: raid6: .... xor() 15102 MB/s, rmw enabled Jul 2 00:21:52.954006 kernel: raid6: using avx512x2 recovery algorithm Jul 2 00:21:52.984919 kernel: xor: automatically using best checksumming function avx Jul 2 00:21:53.242961 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 2 00:21:53.255970 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 2 00:21:53.261194 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 00:21:53.282001 systemd-udevd[397]: Using default interface naming scheme 'v255'. Jul 2 00:21:53.287700 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 00:21:53.297219 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 2 00:21:53.328277 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Jul 2 00:21:53.366239 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 2 00:21:53.372172 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 00:21:53.440663 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 00:21:53.452222 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 2 00:21:53.496456 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 2 00:21:53.507543 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 00:21:53.518345 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 00:21:53.520039 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 00:21:53.532481 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 2 00:21:53.587630 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 2 00:21:53.595954 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jul 2 00:21:53.630589 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jul 2 00:21:53.631186 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy. 
Jul 2 00:21:53.634076 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:24:04:8d:e5:bf Jul 2 00:21:53.638037 kernel: cryptd: max_cpu_qlen set to 1000 Jul 2 00:21:53.657532 (udev-worker)[451]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:21:53.658207 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 00:21:53.658427 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 00:21:53.664062 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 00:21:53.672806 kernel: nvme nvme0: pci function 0000:00:04.0 Jul 2 00:21:53.673090 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jul 2 00:21:53.667692 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 00:21:53.668011 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:21:53.685057 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jul 2 00:21:53.671325 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:21:53.692362 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:21:53.698544 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 2 00:21:53.698619 kernel: GPT:9289727 != 16777215 Jul 2 00:21:53.698652 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 2 00:21:53.698670 kernel: GPT:9289727 != 16777215 Jul 2 00:21:53.698686 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 2 00:21:53.698703 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 00:21:53.744508 kernel: AVX2 version of gcm_enc/dec engaged. Jul 2 00:21:53.744581 kernel: AES CTR mode by8 optimization enabled Jul 2 00:21:53.882923 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (459) Jul 2 00:21:53.921924 kernel: BTRFS: device fsid 2fd636b8-f582-46f8-bde2-15e56e3958c1 devid 1 transid 35 /dev/nvme0n1p3 scanned by (udev-worker) (448) Jul 2 00:21:53.934485 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:21:53.946196 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 00:21:54.013933 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jul 2 00:21:54.020103 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 00:21:54.049458 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jul 2 00:21:54.065948 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 2 00:21:54.073517 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jul 2 00:21:54.073663 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jul 2 00:21:54.084069 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 2 00:21:54.094765 disk-uuid[631]: Primary Header is updated. Jul 2 00:21:54.094765 disk-uuid[631]: Secondary Entries is updated. Jul 2 00:21:54.094765 disk-uuid[631]: Secondary Header is updated. 
Jul 2 00:21:54.100932 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 00:21:54.108932 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 00:21:54.115923 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 00:21:55.128100 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jul 2 00:21:55.131607 disk-uuid[632]: The operation has completed successfully. Jul 2 00:21:55.332201 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 00:21:55.332329 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 2 00:21:55.359036 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 2 00:21:55.374478 sh[975]: Success Jul 2 00:21:55.402983 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 2 00:21:55.524994 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 2 00:21:55.543254 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 2 00:21:55.549240 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 2 00:21:55.579033 kernel: BTRFS info (device dm-0): first mount of filesystem 2fd636b8-f582-46f8-bde2-15e56e3958c1 Jul 2 00:21:55.579233 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 2 00:21:55.583464 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 2 00:21:55.583553 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 2 00:21:55.583574 kernel: BTRFS info (device dm-0): using free space tree Jul 2 00:21:55.701931 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jul 2 00:21:55.734090 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 2 00:21:55.735006 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 2 00:21:55.742110 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 2 00:21:55.747023 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 2 00:21:55.819862 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346 Jul 2 00:21:55.820007 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 2 00:21:55.820030 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 2 00:21:55.827012 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 2 00:21:55.844934 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346 Jul 2 00:21:55.845558 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 2 00:21:55.877640 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 2 00:21:55.885425 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 2 00:21:55.940507 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 00:21:55.950810 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 00:21:55.995860 systemd-networkd[1167]: lo: Link UP Jul 2 00:21:55.995870 systemd-networkd[1167]: lo: Gained carrier Jul 2 00:21:55.997693 systemd-networkd[1167]: Enumeration completed Jul 2 00:21:55.998146 systemd-networkd[1167]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:21:55.998151 systemd-networkd[1167]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
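
The GPT complaints a few entries back ("GPT:Primary header thinks Alt. header is not at the end of the disk", "GPT:9289727 != 16777215") are the usual first-boot symptom of a disk image written to a larger EBS volume: the backup GPT header still sits where the image ended rather than at the volume's real last sector, and the disk-uuid step above updates the headers before the partitions are re-read. Assuming the standard 512-byte logical sectors for this NVMe/EBS device, the two LBAs translate directly into sizes:

    # Sketch: interpret the sector numbers from the GPT warning.
    SECTOR = 512                 # assumed logical sector size

    image_last_lba = 9289727     # where the image's backup GPT header ended up
    volume_last_lba = 16777215   # actual last LBA of the EBS volume

    def gib(lba_count: int) -> float:
        return lba_count * SECTOR / 2**30

    print(f"image size  ~ {gib(image_last_lba + 1):.2f} GiB")   # ~4.43 GiB
    print(f"volume size ~ {gib(volume_last_lba + 1):.2f} GiB")  # 8.00 GiB
    # Relocating the backup header to the end of the disk (e.g. with parted
    # or sgdisk) is what the kernel's "Use GNU Parted" hint refers to.
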
Jul 2 00:21:56.000833 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 00:21:56.003070 systemd[1]: Reached target network.target - Network. Jul 2 00:21:56.009144 systemd-networkd[1167]: eth0: Link UP Jul 2 00:21:56.009151 systemd-networkd[1167]: eth0: Gained carrier Jul 2 00:21:56.009164 systemd-networkd[1167]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:21:56.026236 systemd-networkd[1167]: eth0: DHCPv4 address 172.31.26.26/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 2 00:21:56.399057 ignition[1124]: Ignition 2.18.0 Jul 2 00:21:56.399072 ignition[1124]: Stage: fetch-offline Jul 2 00:21:56.399351 ignition[1124]: no configs at "/usr/lib/ignition/base.d" Jul 2 00:21:56.399364 ignition[1124]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 00:21:56.401280 ignition[1124]: Ignition finished successfully Jul 2 00:21:56.406110 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 00:21:56.418331 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jul 2 00:21:56.454079 ignition[1177]: Ignition 2.18.0 Jul 2 00:21:56.454094 ignition[1177]: Stage: fetch Jul 2 00:21:56.455283 ignition[1177]: no configs at "/usr/lib/ignition/base.d" Jul 2 00:21:56.455297 ignition[1177]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 00:21:56.455508 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 00:21:56.466714 ignition[1177]: PUT result: OK Jul 2 00:21:56.470140 ignition[1177]: parsed url from cmdline: "" Jul 2 00:21:56.470151 ignition[1177]: no config URL provided Jul 2 00:21:56.470203 ignition[1177]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 00:21:56.470219 ignition[1177]: no config at "/usr/lib/ignition/user.ign" Jul 2 00:21:56.470242 ignition[1177]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 00:21:56.473210 ignition[1177]: PUT result: OK Jul 2 00:21:56.474296 ignition[1177]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jul 2 00:21:56.476189 ignition[1177]: GET result: OK Jul 2 00:21:56.477368 ignition[1177]: parsing config with SHA512: 40c59c489676e2758616c7dbca5a20833846d824c0fb74c3bef9587dfd6b33a6ccf3b4c9f699f1e94a50a62c0091baeab3bb62bd3b2682f62a6b43d73b03c5f9 Jul 2 00:21:56.483250 unknown[1177]: fetched base config from "system" Jul 2 00:21:56.483266 unknown[1177]: fetched base config from "system" Jul 2 00:21:56.483272 unknown[1177]: fetched user config from "aws" Jul 2 00:21:56.484618 ignition[1177]: fetch: fetch complete Jul 2 00:21:56.484624 ignition[1177]: fetch: fetch passed Jul 2 00:21:56.484678 ignition[1177]: Ignition finished successfully Jul 2 00:21:56.488092 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 2 00:21:56.496060 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
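
The Ignition fetch stage above follows the EC2 IMDSv2 pattern visible in its own log lines: a PUT to http://169.254.169.254/latest/api/token to obtain a session token, then a GET of http://169.254.169.254/2019-10-01/user-data with that token, after which the fetched config is parsed (the SHA512 line is the digest of that config). A minimal stand-alone version of the same exchange, assuming it runs on an instance where IMDSv2 is reachable; only the standard AWS metadata headers are used, nothing Ignition-specific:

    # Sketch of the IMDSv2 exchange performed by the fetch stage above:
    #   1. PUT /latest/api/token with a TTL header to get a session token.
    #   2. GET the user-data, presenting that token.
    import urllib.request

    IMDS = "http://169.254.169.254"

    def fetch_user_data(ttl_seconds: int = 60) -> bytes:
        token_req = urllib.request.Request(
            f"{IMDS}/latest/api/token",
            method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
        )
        with urllib.request.urlopen(token_req, timeout=5) as resp:
            token = resp.read().decode()

        data_req = urllib.request.Request(
            f"{IMDS}/2019-10-01/user-data",   # same API version as in the log
            headers={"X-aws-ec2-metadata-token": token},
        )
        with urllib.request.urlopen(data_req, timeout=5) as resp:
            return resp.read()

    if __name__ == "__main__":
        print(fetch_user_data()[:200])
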
Jul 2 00:21:56.556211 ignition[1185]: Ignition 2.18.0 Jul 2 00:21:56.556227 ignition[1185]: Stage: kargs Jul 2 00:21:56.558576 ignition[1185]: no configs at "/usr/lib/ignition/base.d" Jul 2 00:21:56.558598 ignition[1185]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 00:21:56.558724 ignition[1185]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 00:21:56.563827 ignition[1185]: PUT result: OK Jul 2 00:21:56.570105 ignition[1185]: kargs: kargs passed Jul 2 00:21:56.570200 ignition[1185]: Ignition finished successfully Jul 2 00:21:56.571989 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 2 00:21:56.586189 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 2 00:21:56.605228 ignition[1192]: Ignition 2.18.0 Jul 2 00:21:56.605243 ignition[1192]: Stage: disks Jul 2 00:21:56.605798 ignition[1192]: no configs at "/usr/lib/ignition/base.d" Jul 2 00:21:56.605815 ignition[1192]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 00:21:56.606075 ignition[1192]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 00:21:56.607623 ignition[1192]: PUT result: OK Jul 2 00:21:56.614764 ignition[1192]: disks: disks passed Jul 2 00:21:56.614851 ignition[1192]: Ignition finished successfully Jul 2 00:21:56.622664 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 2 00:21:56.626684 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 2 00:21:56.629882 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 2 00:21:56.636243 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 00:21:56.639994 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 00:21:56.643316 systemd[1]: Reached target basic.target - Basic System. Jul 2 00:21:56.652304 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 2 00:21:56.702411 systemd-fsck[1201]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 2 00:21:56.709122 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 2 00:21:56.719118 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 2 00:21:56.874921 kernel: EXT4-fs (nvme0n1p9): mounted filesystem c5a17c06-b440-4aab-a0fa-5b60bb1d8586 r/w with ordered data mode. Quota mode: none. Jul 2 00:21:56.875424 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 2 00:21:56.877711 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 2 00:21:56.896087 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 00:21:56.912189 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 2 00:21:56.914526 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 2 00:21:56.914595 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 00:21:56.914630 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Jul 2 00:21:56.939092 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1220) Jul 2 00:21:56.944280 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346 Jul 2 00:21:56.944386 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 2 00:21:56.944410 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 2 00:21:56.944724 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 2 00:21:56.954314 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 2 00:21:56.959937 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 2 00:21:56.961981 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 2 00:21:57.472025 initrd-setup-root[1244]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 00:21:57.501128 initrd-setup-root[1251]: cut: /sysroot/etc/group: No such file or directory Jul 2 00:21:57.509138 initrd-setup-root[1258]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 00:21:57.515831 initrd-setup-root[1265]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 00:21:57.918674 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 2 00:21:57.929196 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 2 00:21:57.936472 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 2 00:21:57.942280 systemd-networkd[1167]: eth0: Gained IPv6LL Jul 2 00:21:57.950337 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 2 00:21:57.952551 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e2db191f-38b3-4d65-844a-7255916ec346 Jul 2 00:21:58.008753 ignition[1333]: INFO : Ignition 2.18.0 Jul 2 00:21:58.008753 ignition[1333]: INFO : Stage: mount Jul 2 00:21:58.008753 ignition[1333]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 00:21:58.008753 ignition[1333]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 00:21:58.008753 ignition[1333]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 00:21:58.024139 ignition[1333]: INFO : PUT result: OK Jul 2 00:21:58.024139 ignition[1333]: INFO : mount: mount passed Jul 2 00:21:58.024139 ignition[1333]: INFO : Ignition finished successfully Jul 2 00:21:58.028646 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 2 00:21:58.032357 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 2 00:21:58.045049 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 2 00:21:58.064361 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 00:21:58.096959 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1345) Jul 2 00:21:58.099611 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e2db191f-38b3-4d65-844a-7255916ec346 Jul 2 00:21:58.099880 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Jul 2 00:21:58.099923 kernel: BTRFS info (device nvme0n1p6): using free space tree Jul 2 00:21:58.107097 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jul 2 00:21:58.114149 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 2 00:21:58.159882 ignition[1362]: INFO : Ignition 2.18.0 Jul 2 00:21:58.159882 ignition[1362]: INFO : Stage: files Jul 2 00:21:58.162000 ignition[1362]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 00:21:58.162000 ignition[1362]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 00:21:58.164450 ignition[1362]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 00:21:58.166651 ignition[1362]: INFO : PUT result: OK Jul 2 00:21:58.169538 ignition[1362]: DEBUG : files: compiled without relabeling support, skipping Jul 2 00:21:58.183379 ignition[1362]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 00:21:58.183379 ignition[1362]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 00:21:58.226967 ignition[1362]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 00:21:58.231343 ignition[1362]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 00:21:58.235204 unknown[1362]: wrote ssh authorized keys file for user: core Jul 2 00:21:58.237155 ignition[1362]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 00:21:58.240053 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 2 00:21:58.242188 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jul 2 00:21:58.242188 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 2 00:21:58.247018 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 00:21:58.247018 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 00:21:58.247018 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 00:21:58.247018 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 00:21:58.247018 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 00:21:58.247018 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 00:21:58.247018 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1 Jul 2 00:21:58.565418 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Jul 2 00:21:58.936115 ignition[1362]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Jul 2 00:21:58.936115 ignition[1362]: INFO : files: op(8): [started] processing unit "containerd.service" Jul 2 00:21:58.940732 ignition[1362]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at 
"/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 2 00:21:58.943705 ignition[1362]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jul 2 00:21:58.943705 ignition[1362]: INFO : files: op(8): [finished] processing unit "containerd.service" Jul 2 00:21:58.948563 ignition[1362]: INFO : files: createResultFile: createFiles: op(a): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 00:21:58.951023 ignition[1362]: INFO : files: createResultFile: createFiles: op(a): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 00:21:58.953293 ignition[1362]: INFO : files: files passed Jul 2 00:21:58.953293 ignition[1362]: INFO : Ignition finished successfully Jul 2 00:21:58.956818 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 2 00:21:58.964163 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 2 00:21:58.975666 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 2 00:21:58.985074 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 00:21:58.985232 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 2 00:21:59.009435 initrd-setup-root-after-ignition[1392]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 00:21:59.016243 initrd-setup-root-after-ignition[1396]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 00:21:59.019284 initrd-setup-root-after-ignition[1392]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 2 00:21:59.023087 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 00:21:59.023581 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 2 00:21:59.037499 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 2 00:21:59.155033 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 00:21:59.155174 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 2 00:21:59.160184 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 2 00:21:59.162804 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 2 00:21:59.165352 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 2 00:21:59.173137 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 2 00:21:59.197302 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 00:21:59.205278 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 2 00:21:59.235355 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 2 00:21:59.238111 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 00:21:59.240082 systemd[1]: Stopped target timers.target - Timer Units. Jul 2 00:21:59.243641 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 00:21:59.243867 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 00:21:59.250417 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 2 00:21:59.260930 systemd[1]: Stopped target basic.target - Basic System. 
Jul 2 00:21:59.262485 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 2 00:21:59.266514 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 00:21:59.266735 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 2 00:21:59.272012 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 2 00:21:59.272342 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 00:21:59.276685 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 2 00:21:59.278094 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 2 00:21:59.279331 systemd[1]: Stopped target swap.target - Swaps. Jul 2 00:21:59.283826 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 00:21:59.284014 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 2 00:21:59.286060 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 2 00:21:59.288492 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 00:21:59.293310 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 2 00:21:59.294380 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 00:21:59.297179 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 00:21:59.297482 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 2 00:21:59.301303 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 00:21:59.301676 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 00:21:59.305696 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 00:21:59.306440 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 2 00:21:59.318475 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 2 00:21:59.323159 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 2 00:21:59.325486 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 00:21:59.325780 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 00:21:59.328394 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 00:21:59.328690 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 2 00:21:59.336840 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 00:21:59.337015 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 2 00:21:59.364042 ignition[1416]: INFO : Ignition 2.18.0 Jul 2 00:21:59.364042 ignition[1416]: INFO : Stage: umount Jul 2 00:21:59.366511 ignition[1416]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 00:21:59.366511 ignition[1416]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 00:21:59.366511 ignition[1416]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 00:21:59.377353 ignition[1416]: INFO : PUT result: OK Jul 2 00:21:59.386153 ignition[1416]: INFO : umount: umount passed Jul 2 00:21:59.386153 ignition[1416]: INFO : Ignition finished successfully Jul 2 00:21:59.388462 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 00:21:59.388600 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 2 00:21:59.398551 systemd[1]: ignition-disks.service: Deactivated successfully. 
Jul 2 00:21:59.398888 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 2 00:21:59.403763 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 00:21:59.403862 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 2 00:21:59.406980 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 2 00:21:59.407052 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 2 00:21:59.410103 systemd[1]: Stopped target network.target - Network. Jul 2 00:21:59.411389 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 00:21:59.411459 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 00:21:59.412722 systemd[1]: Stopped target paths.target - Path Units. Jul 2 00:21:59.413772 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 00:21:59.415412 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 00:21:59.417392 systemd[1]: Stopped target slices.target - Slice Units. Jul 2 00:21:59.418629 systemd[1]: Stopped target sockets.target - Socket Units. Jul 2 00:21:59.420688 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 00:21:59.420881 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 00:21:59.422117 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 00:21:59.422218 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 00:21:59.424076 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 00:21:59.424132 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 2 00:21:59.427359 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 2 00:21:59.427418 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 2 00:21:59.440754 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 2 00:21:59.455966 systemd-networkd[1167]: eth0: DHCPv6 lease lost Jul 2 00:21:59.456051 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 2 00:21:59.464059 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 00:21:59.467179 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 00:21:59.467611 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 2 00:21:59.470615 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 00:21:59.470745 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 2 00:21:59.491765 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 00:21:59.491975 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 2 00:21:59.519069 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 00:21:59.519147 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 2 00:21:59.521388 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 00:21:59.521473 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 2 00:21:59.539121 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 2 00:21:59.542706 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 00:21:59.542963 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 00:21:59.545585 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 00:21:59.545666 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jul 2 00:21:59.554048 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 00:21:59.554861 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 2 00:21:59.557085 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 2 00:21:59.557168 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 00:21:59.566394 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 00:21:59.636431 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 00:21:59.636815 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 00:21:59.641581 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 00:21:59.641766 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 2 00:21:59.646169 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 00:21:59.646230 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 00:21:59.649625 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 00:21:59.649718 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 2 00:21:59.656089 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 00:21:59.656161 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 2 00:21:59.664690 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 00:21:59.664767 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 00:21:59.680303 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 2 00:21:59.684268 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 2 00:21:59.685009 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 00:21:59.687905 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 2 00:21:59.687983 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 2 00:21:59.690014 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 00:21:59.690090 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 00:21:59.696409 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 00:21:59.696495 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:21:59.709300 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 00:21:59.709434 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 2 00:21:59.750666 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 00:21:59.750813 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 2 00:21:59.763999 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 2 00:21:59.777103 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 2 00:21:59.839445 systemd[1]: Switching root. Jul 2 00:21:59.881094 systemd-journald[178]: Journal stopped Jul 2 00:22:03.286230 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). 
Jul 2 00:22:03.286337 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 00:22:03.286361 kernel: SELinux: policy capability open_perms=1 Jul 2 00:22:03.286381 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 00:22:03.286401 kernel: SELinux: policy capability always_check_network=0 Jul 2 00:22:03.286420 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 00:22:03.286441 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 00:22:03.286466 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 00:22:03.286487 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 00:22:03.286510 kernel: audit: type=1403 audit(1719879721.442:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 00:22:03.286531 systemd[1]: Successfully loaded SELinux policy in 66.067ms. Jul 2 00:22:03.286559 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 76.128ms. Jul 2 00:22:03.286583 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 2 00:22:03.286605 systemd[1]: Detected virtualization amazon. Jul 2 00:22:03.286626 systemd[1]: Detected architecture x86-64. Jul 2 00:22:03.286648 systemd[1]: Detected first boot. Jul 2 00:22:03.286672 systemd[1]: Initializing machine ID from VM UUID. Jul 2 00:22:03.286698 zram_generator::config[1475]: No configuration found. Jul 2 00:22:03.286724 systemd[1]: Populated /etc with preset unit settings. Jul 2 00:22:03.286745 systemd[1]: Queued start job for default target multi-user.target. Jul 2 00:22:03.286767 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jul 2 00:22:03.286791 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 2 00:22:03.286812 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 2 00:22:03.286833 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 2 00:22:03.286854 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 2 00:22:03.286876 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 2 00:22:03.298957 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 2 00:22:03.298997 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 2 00:22:03.299017 systemd[1]: Created slice user.slice - User and Session Slice. Jul 2 00:22:03.299039 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 2 00:22:03.299068 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 00:22:03.299088 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 2 00:22:03.299113 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 2 00:22:03.299133 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 2 00:22:03.299158 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 2 00:22:03.299178 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... 
Jul 2 00:22:03.299197 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 2 00:22:03.299217 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 2 00:22:03.299236 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 00:22:03.299256 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 00:22:03.299276 systemd[1]: Reached target slices.target - Slice Units. Jul 2 00:22:03.299295 systemd[1]: Reached target swap.target - Swaps. Jul 2 00:22:03.299315 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 2 00:22:03.299338 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 2 00:22:03.299358 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 2 00:22:03.299378 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 2 00:22:03.299397 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 00:22:03.299417 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 00:22:03.299437 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 00:22:03.299456 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 2 00:22:03.299475 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 2 00:22:03.299494 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 2 00:22:03.299516 systemd[1]: Mounting media.mount - External Media Directory... Jul 2 00:22:03.299536 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 00:22:03.299556 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 2 00:22:03.299575 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 2 00:22:03.299594 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 2 00:22:03.299613 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 2 00:22:03.299632 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 00:22:03.299652 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 00:22:03.299674 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 2 00:22:03.299694 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 00:22:03.299713 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 00:22:03.299733 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 00:22:03.299753 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 2 00:22:03.299772 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 00:22:03.299792 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 00:22:03.299812 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jul 2 00:22:03.299832 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jul 2 00:22:03.299854 systemd[1]: Starting systemd-journald.service - Journal Service... 
Jul 2 00:22:03.299873 kernel: loop: module loaded Jul 2 00:22:03.306409 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 2 00:22:03.306462 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 2 00:22:03.306970 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 2 00:22:03.306998 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 00:22:03.307021 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 00:22:03.307043 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 2 00:22:03.307073 kernel: fuse: init (API version 7.39) Jul 2 00:22:03.307096 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 2 00:22:03.307118 systemd[1]: Mounted media.mount - External Media Directory. Jul 2 00:22:03.307139 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 2 00:22:03.307160 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 2 00:22:03.307181 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 2 00:22:03.307203 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 2 00:22:03.307226 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 00:22:03.307248 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 00:22:03.307313 systemd-journald[1575]: Collecting audit messages is disabled. Jul 2 00:22:03.307357 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 2 00:22:03.307379 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:22:03.307401 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 00:22:03.307422 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:22:03.307447 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 00:22:03.307469 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 00:22:03.307491 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 2 00:22:03.307512 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:22:03.307534 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 00:22:03.307555 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 00:22:03.307577 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 2 00:22:03.307601 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 2 00:22:03.307623 systemd-journald[1575]: Journal started Jul 2 00:22:03.307664 systemd-journald[1575]: Runtime Journal (/run/log/journal/ec24bec6e2e86f0089bb71890fd37cd4) is 4.8M, max 38.6M, 33.7M free. Jul 2 00:22:03.316966 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 00:22:03.310714 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 2 00:22:03.346500 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 2 00:22:03.353199 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Jul 2 00:22:03.354951 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 00:22:03.379065 kernel: ACPI: bus type drm_connector registered Jul 2 00:22:03.375104 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 2 00:22:03.412468 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 2 00:22:03.414025 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 00:22:03.430257 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 2 00:22:03.433763 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 00:22:03.442239 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 00:22:03.460047 systemd-journald[1575]: Time spent on flushing to /var/log/journal/ec24bec6e2e86f0089bb71890fd37cd4 is 104.881ms for 927 entries. Jul 2 00:22:03.460047 systemd-journald[1575]: System Journal (/var/log/journal/ec24bec6e2e86f0089bb71890fd37cd4) is 8.0M, max 195.6M, 187.6M free. Jul 2 00:22:03.586179 systemd-journald[1575]: Received client request to flush runtime journal. Jul 2 00:22:03.454374 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 2 00:22:03.471705 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 00:22:03.473306 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 00:22:03.475260 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 2 00:22:03.476735 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 2 00:22:03.517414 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 2 00:22:03.519874 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 2 00:22:03.549825 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 00:22:03.564519 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 00:22:03.580527 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 2 00:22:03.591300 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 2 00:22:03.612579 udevadm[1635]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 2 00:22:03.622435 systemd-tmpfiles[1607]: ACLs are not supported, ignoring. Jul 2 00:22:03.622460 systemd-tmpfiles[1607]: ACLs are not supported, ignoring. Jul 2 00:22:03.629411 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 2 00:22:03.648344 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 2 00:22:03.710550 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 2 00:22:03.727111 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 00:22:03.757260 systemd-tmpfiles[1647]: ACLs are not supported, ignoring. Jul 2 00:22:03.757766 systemd-tmpfiles[1647]: ACLs are not supported, ignoring. Jul 2 00:22:03.766393 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jul 2 00:22:04.600602 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 2 00:22:04.611168 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 00:22:04.695572 systemd-udevd[1653]: Using default interface naming scheme 'v255'. Jul 2 00:22:04.767236 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 00:22:04.778103 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 00:22:04.834167 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 2 00:22:04.857567 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jul 2 00:22:04.945159 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1658) Jul 2 00:22:04.948442 (udev-worker)[1665]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:22:04.977703 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 2 00:22:05.049924 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 2 00:22:05.090574 kernel: ACPI: button: Power Button [PWRF] Jul 2 00:22:05.090717 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Jul 2 00:22:05.090832 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jul 2 00:22:05.097976 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255 Jul 2 00:22:05.126393 kernel: ACPI: button: Sleep Button [SLPF] Jul 2 00:22:05.160772 systemd-networkd[1655]: lo: Link UP Jul 2 00:22:05.161799 systemd-networkd[1655]: lo: Gained carrier Jul 2 00:22:05.164058 systemd-networkd[1655]: Enumeration completed Jul 2 00:22:05.167084 systemd-networkd[1655]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:22:05.167191 systemd-networkd[1655]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 00:22:05.167210 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 00:22:05.174212 kernel: mousedev: PS/2 mouse device common for all mice Jul 2 00:22:05.178287 systemd-networkd[1655]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:22:05.178561 systemd-networkd[1655]: eth0: Link UP Jul 2 00:22:05.179123 systemd-networkd[1655]: eth0: Gained carrier Jul 2 00:22:05.179703 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 2 00:22:05.184957 systemd-networkd[1655]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 00:22:05.197150 systemd-networkd[1655]: eth0: DHCPv4 address 172.31.26.26/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jul 2 00:22:05.221328 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 2 00:22:05.281944 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (1666) Jul 2 00:22:05.453716 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 2 00:22:05.476333 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jul 2 00:22:05.586218 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
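As a side note on the DHCPv4 lease logged above (172.31.26.26/20 with gateway 172.31.16.1), the standard library can confirm that the gateway sits inside the same /20 the interface was given; this is only an illustration of the addressing, not something the boot code runs:

    import ipaddress

    # Values copied from the systemd-networkd entry above.
    iface = ipaddress.ip_interface("172.31.26.26/20")
    gateway = ipaddress.ip_address("172.31.16.1")

    print(iface.network)             # 172.31.16.0/20
    print(gateway in iface.network)  # True: the gateway is on-link for eth0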
Jul 2 00:22:05.589007 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 2 00:22:05.613657 lvm[1775]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 00:22:05.643105 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 2 00:22:05.644966 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 00:22:05.655175 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 2 00:22:05.662479 lvm[1780]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 00:22:05.692554 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 2 00:22:05.695240 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 2 00:22:05.697784 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 00:22:05.697818 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 00:22:05.699075 systemd[1]: Reached target machines.target - Containers. Jul 2 00:22:05.705684 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 2 00:22:05.713156 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 2 00:22:05.720609 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 2 00:22:05.721995 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 00:22:05.727168 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 2 00:22:05.736192 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 2 00:22:05.746154 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 2 00:22:05.749638 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 2 00:22:05.778868 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 2 00:22:05.791929 kernel: loop0: detected capacity change from 0 to 139904 Jul 2 00:22:05.792037 kernel: block loop0: the capability attribute has been deprecated. Jul 2 00:22:05.818496 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 00:22:05.819706 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 2 00:22:05.949015 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 00:22:05.976290 kernel: loop1: detected capacity change from 0 to 209816 Jul 2 00:22:06.027925 kernel: loop2: detected capacity change from 0 to 80568 Jul 2 00:22:06.148925 kernel: loop3: detected capacity change from 0 to 60984 Jul 2 00:22:06.286926 kernel: loop4: detected capacity change from 0 to 139904 Jul 2 00:22:06.308638 kernel: loop5: detected capacity change from 0 to 209816 Jul 2 00:22:06.339950 kernel: loop6: detected capacity change from 0 to 80568 Jul 2 00:22:06.370959 kernel: loop7: detected capacity change from 0 to 60984 Jul 2 00:22:06.388438 (sd-merge)[1802]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jul 2 00:22:06.390345 (sd-merge)[1802]: Merged extensions into '/usr'. 
Jul 2 00:22:06.406180 systemd[1]: Reloading requested from client PID 1788 ('systemd-sysext') (unit systemd-sysext.service)... Jul 2 00:22:06.406204 systemd[1]: Reloading... Jul 2 00:22:06.543920 zram_generator::config[1828]: No configuration found. Jul 2 00:22:06.792026 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:22:06.911596 systemd[1]: Reloading finished in 503 ms. Jul 2 00:22:06.950840 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 2 00:22:06.959758 systemd[1]: Starting ensure-sysext.service... Jul 2 00:22:06.972198 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 00:22:06.989628 systemd[1]: Reloading requested from client PID 1882 ('systemctl') (unit ensure-sysext.service)... Jul 2 00:22:06.989816 systemd[1]: Reloading... Jul 2 00:22:07.010638 systemd-tmpfiles[1886]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 00:22:07.011156 systemd-tmpfiles[1886]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 2 00:22:07.013622 systemd-tmpfiles[1886]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 00:22:07.014459 systemd-tmpfiles[1886]: ACLs are not supported, ignoring. Jul 2 00:22:07.014801 systemd-tmpfiles[1886]: ACLs are not supported, ignoring. Jul 2 00:22:07.021631 systemd-tmpfiles[1886]: Detected autofs mount point /boot during canonicalization of boot. Jul 2 00:22:07.021806 systemd-tmpfiles[1886]: Skipping /boot Jul 2 00:22:07.040992 systemd-tmpfiles[1886]: Detected autofs mount point /boot during canonicalization of boot. Jul 2 00:22:07.041148 systemd-tmpfiles[1886]: Skipping /boot Jul 2 00:22:07.142926 zram_generator::config[1915]: No configuration found. Jul 2 00:22:07.156146 systemd-networkd[1655]: eth0: Gained IPv6LL Jul 2 00:22:07.211928 ldconfig[1784]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 00:22:07.354518 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:22:07.448160 systemd[1]: Reloading finished in 457 ms. Jul 2 00:22:07.469328 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 2 00:22:07.471635 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 2 00:22:07.479784 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 00:22:07.504148 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 00:22:07.514222 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 2 00:22:07.516818 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 2 00:22:07.530129 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 00:22:07.533397 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 2 00:22:07.563269 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jul 2 00:22:07.563574 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 00:22:07.567762 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 00:22:07.575711 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 00:22:07.588192 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 00:22:07.589692 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 00:22:07.589917 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 00:22:07.600795 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 00:22:07.601314 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 00:22:07.613940 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 00:22:07.615315 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 00:22:07.615589 systemd[1]: Reached target time-set.target - System Time Set. Jul 2 00:22:07.617384 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 00:22:07.627017 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:22:07.627279 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 00:22:07.642465 systemd[1]: Finished ensure-sysext.service. Jul 2 00:22:07.649498 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 00:22:07.666603 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 2 00:22:07.670613 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:22:07.670854 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 00:22:07.677853 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:22:07.678117 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 00:22:07.686394 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 00:22:07.691309 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 00:22:07.698061 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 00:22:07.709235 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 2 00:22:07.727050 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 2 00:22:07.740981 augenrules[2016]: No rules Jul 2 00:22:07.745684 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 00:22:07.786360 systemd-resolved[1980]: Positive Trust Anchors: Jul 2 00:22:07.786376 systemd-resolved[1980]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 00:22:07.786434 systemd-resolved[1980]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Jul 2 00:22:07.794534 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 2 00:22:07.807008 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 2 00:22:07.809069 systemd-resolved[1980]: Defaulting to hostname 'linux'. Jul 2 00:22:07.812623 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 00:22:07.814409 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 00:22:07.817166 systemd[1]: Reached target network.target - Network. Jul 2 00:22:07.818267 systemd[1]: Reached target network-online.target - Network is Online. Jul 2 00:22:07.821031 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 00:22:07.822319 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 00:22:07.823765 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 2 00:22:07.825261 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 2 00:22:07.826767 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 2 00:22:07.828972 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 2 00:22:07.830844 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 2 00:22:07.833054 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 00:22:07.833089 systemd[1]: Reached target paths.target - Path Units. Jul 2 00:22:07.835292 systemd[1]: Reached target timers.target - Timer Units. Jul 2 00:22:07.837637 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 2 00:22:07.843762 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 2 00:22:07.846527 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 2 00:22:07.849865 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 2 00:22:07.854325 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 00:22:07.856210 systemd[1]: Reached target basic.target - Basic System. Jul 2 00:22:07.859025 systemd[1]: System is tainted: cgroupsv1 Jul 2 00:22:07.859276 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 2 00:22:07.859313 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 2 00:22:07.869061 systemd[1]: Starting containerd.service - containerd container runtime... 
Jul 2 00:22:07.882292 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 2 00:22:07.887180 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 2 00:22:07.934106 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 2 00:22:07.954144 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 2 00:22:07.955521 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 2 00:22:07.960111 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:22:07.970977 jq[2033]: false Jul 2 00:22:07.980188 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 2 00:22:07.988117 systemd[1]: Started ntpd.service - Network Time Service. Jul 2 00:22:08.007441 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 2 00:22:08.045037 systemd[1]: Starting setup-oem.service - Setup OEM... Jul 2 00:22:08.063094 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 2 00:22:08.068146 extend-filesystems[2034]: Found loop4 Jul 2 00:22:08.068146 extend-filesystems[2034]: Found loop5 Jul 2 00:22:08.068146 extend-filesystems[2034]: Found loop6 Jul 2 00:22:08.068146 extend-filesystems[2034]: Found loop7 Jul 2 00:22:08.068146 extend-filesystems[2034]: Found nvme0n1 Jul 2 00:22:08.068146 extend-filesystems[2034]: Found nvme0n1p1 Jul 2 00:22:08.068146 extend-filesystems[2034]: Found nvme0n1p2 Jul 2 00:22:08.068146 extend-filesystems[2034]: Found nvme0n1p3 Jul 2 00:22:08.068146 extend-filesystems[2034]: Found usr Jul 2 00:22:08.068146 extend-filesystems[2034]: Found nvme0n1p4 Jul 2 00:22:08.068146 extend-filesystems[2034]: Found nvme0n1p6 Jul 2 00:22:08.120058 extend-filesystems[2034]: Found nvme0n1p7 Jul 2 00:22:08.120058 extend-filesystems[2034]: Found nvme0n1p9 Jul 2 00:22:08.120058 extend-filesystems[2034]: Checking size of /dev/nvme0n1p9 Jul 2 00:22:08.104012 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 2 00:22:08.133137 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 2 00:22:08.134748 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 00:22:08.146758 dbus-daemon[2032]: [system] SELinux support is enabled Jul 2 00:22:08.160372 extend-filesystems[2034]: Resized partition /dev/nvme0n1p9 Jul 2 00:22:08.163119 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jul 2 00:22:08.150182 systemd[1]: Starting update-engine.service - Update Engine... Jul 2 00:22:08.163405 extend-filesystems[2065]: resize2fs 1.47.0 (5-Feb-2023) Jul 2 00:22:08.170863 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 2 00:22:08.181907 dbus-daemon[2032]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1655 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 2 00:22:08.184846 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jul 2 00:22:08.224619 ntpd[2038]: ntpd 4.2.8p17@1.4004-o Mon Jul 1 22:10:01 UTC 2024 (1): Starting Jul 2 00:22:08.230378 ntpd[2038]: 2 Jul 00:22:08 ntpd[2038]: ntpd 4.2.8p17@1.4004-o Mon Jul 1 22:10:01 UTC 2024 (1): Starting Jul 2 00:22:08.230378 ntpd[2038]: 2 Jul 00:22:08 ntpd[2038]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 2 00:22:08.230378 ntpd[2038]: 2 Jul 00:22:08 ntpd[2038]: ---------------------------------------------------- Jul 2 00:22:08.230378 ntpd[2038]: 2 Jul 00:22:08 ntpd[2038]: ntp-4 is maintained by Network Time Foundation, Jul 2 00:22:08.230378 ntpd[2038]: 2 Jul 00:22:08 ntpd[2038]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 2 00:22:08.230378 ntpd[2038]: 2 Jul 00:22:08 ntpd[2038]: corporation. Support and training for ntp-4 are Jul 2 00:22:08.230378 ntpd[2038]: 2 Jul 00:22:08 ntpd[2038]: available at https://www.nwtime.org/support Jul 2 00:22:08.230378 ntpd[2038]: 2 Jul 00:22:08 ntpd[2038]: ---------------------------------------------------- Jul 2 00:22:08.224654 ntpd[2038]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jul 2 00:22:08.260423 jq[2067]: true Jul 2 00:22:08.225973 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 00:22:08.267995 ntpd[2038]: 2 Jul 00:22:08 ntpd[2038]: proto: precision = 0.094 usec (-23) Jul 2 00:22:08.267995 ntpd[2038]: 2 Jul 00:22:08 ntpd[2038]: basedate set to 2024-06-19 Jul 2 00:22:08.267995 ntpd[2038]: 2 Jul 00:22:08 ntpd[2038]: gps base set to 2024-06-23 (week 2320) Jul 2 00:22:08.267995 ntpd[2038]: 2 Jul 00:22:08 ntpd[2038]: Listen and drop on 0 v6wildcard [::]:123 Jul 2 00:22:08.267995 ntpd[2038]: 2 Jul 00:22:08 ntpd[2038]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 2 00:22:08.267995 ntpd[2038]: 2 Jul 00:22:08 ntpd[2038]: Listen normally on 2 lo 127.0.0.1:123 Jul 2 00:22:08.267995 ntpd[2038]: 2 Jul 00:22:08 ntpd[2038]: Listen normally on 3 eth0 172.31.26.26:123 Jul 2 00:22:08.267995 ntpd[2038]: 2 Jul 00:22:08 ntpd[2038]: Listen normally on 4 lo [::1]:123 Jul 2 00:22:08.267995 ntpd[2038]: 2 Jul 00:22:08 ntpd[2038]: Listen normally on 5 eth0 [fe80::424:4ff:fe8d:e5bf%2]:123 Jul 2 00:22:08.267995 ntpd[2038]: 2 Jul 00:22:08 ntpd[2038]: Listening on routing socket on fd #22 for interface updates Jul 2 00:22:08.224666 ntpd[2038]: ---------------------------------------------------- Jul 2 00:22:08.226322 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 2 00:22:08.224676 ntpd[2038]: ntp-4 is maintained by Network Time Foundation, Jul 2 00:22:08.228331 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 00:22:08.224686 ntpd[2038]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jul 2 00:22:08.228668 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 2 00:22:08.224698 ntpd[2038]: corporation. Support and training for ntp-4 are Jul 2 00:22:08.243642 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 00:22:08.224708 ntpd[2038]: available at https://www.nwtime.org/support Jul 2 00:22:08.248050 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 2 00:22:08.224720 ntpd[2038]: ---------------------------------------------------- Jul 2 00:22:08.251148 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jul 2 00:22:08.233421 ntpd[2038]: proto: precision = 0.094 usec (-23) Jul 2 00:22:08.237207 ntpd[2038]: basedate set to 2024-06-19 Jul 2 00:22:08.278938 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jul 2 00:22:08.237229 ntpd[2038]: gps base set to 2024-06-23 (week 2320) Jul 2 00:22:08.263049 ntpd[2038]: Listen and drop on 0 v6wildcard [::]:123 Jul 2 00:22:08.338078 coreos-metadata[2030]: Jul 02 00:22:08.325 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 2 00:22:08.338078 coreos-metadata[2030]: Jul 02 00:22:08.333 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jul 2 00:22:08.338078 coreos-metadata[2030]: Jul 02 00:22:08.336 INFO Fetch successful Jul 2 00:22:08.338078 coreos-metadata[2030]: Jul 02 00:22:08.337 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jul 2 00:22:08.347682 update_engine[2063]: I0702 00:22:08.296818 2063 main.cc:92] Flatcar Update Engine starting Jul 2 00:22:08.347682 update_engine[2063]: I0702 00:22:08.299579 2063 update_check_scheduler.cc:74] Next update check in 8m58s Jul 2 00:22:08.316382 (ntainerd)[2082]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 2 00:22:08.348294 ntpd[2038]: 2 Jul 00:22:08 ntpd[2038]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 2 00:22:08.348294 ntpd[2038]: 2 Jul 00:22:08 ntpd[2038]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 2 00:22:08.263106 ntpd[2038]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jul 2 00:22:08.348421 coreos-metadata[2030]: Jul 02 00:22:08.347 INFO Fetch successful Jul 2 00:22:08.348421 coreos-metadata[2030]: Jul 02 00:22:08.347 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jul 2 00:22:08.348514 jq[2080]: true Jul 2 00:22:08.329342 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 00:22:08.263279 ntpd[2038]: Listen normally on 2 lo 127.0.0.1:123 Jul 2 00:22:08.329404 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 2 00:22:08.263315 ntpd[2038]: Listen normally on 3 eth0 172.31.26.26:123 Jul 2 00:22:08.331661 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Jul 2 00:22:08.360275 coreos-metadata[2030]: Jul 02 00:22:08.351 INFO Fetch successful Jul 2 00:22:08.360275 coreos-metadata[2030]: Jul 02 00:22:08.351 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jul 2 00:22:08.360275 coreos-metadata[2030]: Jul 02 00:22:08.351 INFO Fetch successful Jul 2 00:22:08.360275 coreos-metadata[2030]: Jul 02 00:22:08.351 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jul 2 00:22:08.360275 coreos-metadata[2030]: Jul 02 00:22:08.355 INFO Fetch failed with 404: resource not found Jul 2 00:22:08.360275 coreos-metadata[2030]: Jul 02 00:22:08.355 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jul 2 00:22:08.360275 coreos-metadata[2030]: Jul 02 00:22:08.358 INFO Fetch successful Jul 2 00:22:08.360275 coreos-metadata[2030]: Jul 02 00:22:08.359 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jul 2 00:22:08.360592 extend-filesystems[2065]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 2 00:22:08.360592 extend-filesystems[2065]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 2 00:22:08.360592 extend-filesystems[2065]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jul 2 00:22:08.263355 ntpd[2038]: Listen normally on 4 lo [::1]:123 Jul 2 00:22:08.331693 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 2 00:22:08.369858 coreos-metadata[2030]: Jul 02 00:22:08.362 INFO Fetch successful Jul 2 00:22:08.369858 coreos-metadata[2030]: Jul 02 00:22:08.362 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jul 2 00:22:08.369858 coreos-metadata[2030]: Jul 02 00:22:08.363 INFO Fetch successful Jul 2 00:22:08.369858 coreos-metadata[2030]: Jul 02 00:22:08.363 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jul 2 00:22:08.369858 coreos-metadata[2030]: Jul 02 00:22:08.366 INFO Fetch successful Jul 2 00:22:08.369858 coreos-metadata[2030]: Jul 02 00:22:08.366 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jul 2 00:22:08.369858 coreos-metadata[2030]: Jul 02 00:22:08.368 INFO Fetch successful Jul 2 00:22:08.371365 extend-filesystems[2034]: Resized filesystem in /dev/nvme0n1p9 Jul 2 00:22:08.263394 ntpd[2038]: Listen normally on 5 eth0 [fe80::424:4ff:fe8d:e5bf%2]:123 Jul 2 00:22:08.342994 systemd[1]: Started update-engine.service - Update Engine. Jul 2 00:22:08.263431 ntpd[2038]: Listening on routing socket on fd #22 for interface updates Jul 2 00:22:08.374000 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 00:22:08.300966 ntpd[2038]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 2 00:22:08.374329 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 2 00:22:08.301005 ntpd[2038]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jul 2 00:22:08.338980 dbus-daemon[2032]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 2 00:22:08.430231 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jul 2 00:22:08.433278 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 00:22:08.443076 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
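The coreos-metadata entries above trace the EC2 IMDSv2 flow: a PUT to http://169.254.169.254/latest/api/token to obtain a session token, then GETs against the 2021-01-03 meta-data paths (instance-id, instance-type, local-ipv4, and so on) with that token attached. A minimal Go sketch of the same two-step sequence follows; the endpoint paths are taken from the log, while the token TTL value is an assumption chosen for the example.

// imds_sketch.go - illustrative only; mirrors the token-then-fetch sequence
// visible in the coreos-metadata log lines above (IMDSv2).
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}

	// Step 1: PUT the token endpoint, as in "Putting http://169.254.169.254/latest/api/token".
	// The TTL header value (21600 seconds) is an assumption, not taken from the log.
	req, _ := http.NewRequest(http.MethodPut, "http://169.254.169.254/latest/api/token", nil)
	req.Header.Set("X-aws-ec2-metadata-token-ttl-seconds", "21600")
	resp, err := client.Do(req)
	if err != nil {
		panic(err)
	}
	token, _ := io.ReadAll(resp.Body)
	resp.Body.Close()

	// Step 2: GET a metadata path with the token attached, as in
	// "Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id".
	req, _ = http.NewRequest(http.MethodGet, "http://169.254.169.254/2021-01-03/meta-data/instance-id", nil)
	req.Header.Set("X-aws-ec2-metadata-token", string(token))
	resp, err = client.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	id, _ := io.ReadAll(resp.Body)
	fmt.Println("instance-id:", string(id))
}

The same pattern covers the later public-keys fetch performed by the SSH-keys metadata agent; only the meta-data path changes.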
Jul 2 00:22:08.495920 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (2136) Jul 2 00:22:08.534286 bash[2138]: Updated "/home/core/.ssh/authorized_keys" Jul 2 00:22:08.555614 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 2 00:22:08.560588 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 2 00:22:08.568510 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 2 00:22:08.579623 systemd[1]: Starting sshkeys.service... Jul 2 00:22:08.599947 systemd-logind[2057]: Watching system buttons on /dev/input/event1 (Power Button) Jul 2 00:22:08.599977 systemd-logind[2057]: Watching system buttons on /dev/input/event3 (Sleep Button) Jul 2 00:22:08.600003 systemd-logind[2057]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 2 00:22:08.602685 systemd-logind[2057]: New seat seat0. Jul 2 00:22:08.606548 systemd[1]: Started systemd-logind.service - User Login Management. Jul 2 00:22:08.623604 systemd[1]: Finished setup-oem.service - Setup OEM. Jul 2 00:22:08.638297 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jul 2 00:22:08.671787 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 2 00:22:08.683382 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 2 00:22:08.935129 dbus-daemon[2032]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 2 00:22:08.935999 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 2 00:22:08.940082 dbus-daemon[2032]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2125 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 2 00:22:08.948043 amazon-ssm-agent[2172]: Initializing new seelog logger Jul 2 00:22:08.948043 amazon-ssm-agent[2172]: New Seelog Logger Creation Complete Jul 2 00:22:08.948043 amazon-ssm-agent[2172]: 2024/07/02 00:22:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 00:22:08.948043 amazon-ssm-agent[2172]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 00:22:08.953094 systemd[1]: Starting polkit.service - Authorization Manager... Jul 2 00:22:08.962039 amazon-ssm-agent[2172]: 2024/07/02 00:22:08 processing appconfig overrides Jul 2 00:22:08.962039 amazon-ssm-agent[2172]: 2024/07/02 00:22:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 00:22:08.962039 amazon-ssm-agent[2172]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 00:22:08.962039 amazon-ssm-agent[2172]: 2024/07/02 00:22:08 processing appconfig overrides Jul 2 00:22:08.962039 amazon-ssm-agent[2172]: 2024/07/02 00:22:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 00:22:08.962039 amazon-ssm-agent[2172]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jul 2 00:22:08.962039 amazon-ssm-agent[2172]: 2024/07/02 00:22:08 processing appconfig overrides Jul 2 00:22:08.966989 coreos-metadata[2175]: Jul 02 00:22:08.966 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 2 00:22:08.968933 amazon-ssm-agent[2172]: 2024-07-02 00:22:08 INFO Proxy environment variables: Jul 2 00:22:08.975113 amazon-ssm-agent[2172]: 2024/07/02 00:22:08 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 00:22:08.975113 amazon-ssm-agent[2172]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 00:22:08.975113 amazon-ssm-agent[2172]: 2024/07/02 00:22:08 processing appconfig overrides Jul 2 00:22:08.977808 coreos-metadata[2175]: Jul 02 00:22:08.977 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jul 2 00:22:08.978625 coreos-metadata[2175]: Jul 02 00:22:08.978 INFO Fetch successful Jul 2 00:22:08.978625 coreos-metadata[2175]: Jul 02 00:22:08.978 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 2 00:22:08.983453 coreos-metadata[2175]: Jul 02 00:22:08.981 INFO Fetch successful Jul 2 00:22:08.994454 unknown[2175]: wrote ssh authorized keys file for user: core Jul 2 00:22:09.050448 polkitd[2235]: Started polkitd version 121 Jul 2 00:22:09.070518 update-ssh-keys[2254]: Updated "/home/core/.ssh/authorized_keys" Jul 2 00:22:09.071296 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 2 00:22:09.091995 amazon-ssm-agent[2172]: 2024-07-02 00:22:08 INFO no_proxy: Jul 2 00:22:09.121616 systemd[1]: Finished sshkeys.service. Jul 2 00:22:09.160203 polkitd[2235]: Loading rules from directory /etc/polkit-1/rules.d Jul 2 00:22:09.160295 polkitd[2235]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 2 00:22:09.161583 polkitd[2235]: Finished loading, compiling and executing 2 rules Jul 2 00:22:09.173415 dbus-daemon[2032]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 2 00:22:09.210413 amazon-ssm-agent[2172]: 2024-07-02 00:22:08 INFO https_proxy: Jul 2 00:22:09.221027 systemd[1]: Started polkit.service - Authorization Manager. Jul 2 00:22:09.222141 polkitd[2235]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 2 00:22:09.260560 locksmithd[2126]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 00:22:09.311262 amazon-ssm-agent[2172]: 2024-07-02 00:22:08 INFO http_proxy: Jul 2 00:22:09.334428 systemd-hostnamed[2125]: Hostname set to (transient) Jul 2 00:22:09.335108 systemd-resolved[1980]: System hostname changed to 'ip-172-31-26-26'. Jul 2 00:22:09.428687 amazon-ssm-agent[2172]: 2024-07-02 00:22:08 INFO Checking if agent identity type OnPrem can be assumed Jul 2 00:22:09.444448 containerd[2082]: time="2024-07-02T00:22:09.444327999Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Jul 2 00:22:09.525962 amazon-ssm-agent[2172]: 2024-07-02 00:22:08 INFO Checking if agent identity type EC2 can be assumed Jul 2 00:22:09.560155 containerd[2082]: time="2024-07-02T00:22:09.560059209Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 2 00:22:09.560511 containerd[2082]: time="2024-07-02T00:22:09.560319442Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Jul 2 00:22:09.565962 containerd[2082]: time="2024-07-02T00:22:09.564657610Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.36-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:22:09.565962 containerd[2082]: time="2024-07-02T00:22:09.564704727Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:22:09.565962 containerd[2082]: time="2024-07-02T00:22:09.565046758Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:22:09.565962 containerd[2082]: time="2024-07-02T00:22:09.565071933Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 00:22:09.565962 containerd[2082]: time="2024-07-02T00:22:09.565174663Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 2 00:22:09.565962 containerd[2082]: time="2024-07-02T00:22:09.565237284Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:22:09.565962 containerd[2082]: time="2024-07-02T00:22:09.565254799Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 00:22:09.565962 containerd[2082]: time="2024-07-02T00:22:09.565335144Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:22:09.565962 containerd[2082]: time="2024-07-02T00:22:09.565567274Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 00:22:09.565962 containerd[2082]: time="2024-07-02T00:22:09.565589301Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 00:22:09.565962 containerd[2082]: time="2024-07-02T00:22:09.565604969Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:22:09.566457 containerd[2082]: time="2024-07-02T00:22:09.565791508Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:22:09.566457 containerd[2082]: time="2024-07-02T00:22:09.565812508Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 00:22:09.566457 containerd[2082]: time="2024-07-02T00:22:09.565909330Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 00:22:09.566457 containerd[2082]: time="2024-07-02T00:22:09.565927753Z" level=info msg="metadata content store policy set" policy=shared Jul 2 00:22:09.579241 containerd[2082]: time="2024-07-02T00:22:09.578251590Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Jul 2 00:22:09.579241 containerd[2082]: time="2024-07-02T00:22:09.578308652Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 00:22:09.579241 containerd[2082]: time="2024-07-02T00:22:09.578330999Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 00:22:09.579241 containerd[2082]: time="2024-07-02T00:22:09.578389779Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 2 00:22:09.579241 containerd[2082]: time="2024-07-02T00:22:09.578411519Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 2 00:22:09.579241 containerd[2082]: time="2024-07-02T00:22:09.578472520Z" level=info msg="NRI interface is disabled by configuration." Jul 2 00:22:09.579241 containerd[2082]: time="2024-07-02T00:22:09.578492230Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 00:22:09.579241 containerd[2082]: time="2024-07-02T00:22:09.578665408Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 2 00:22:09.579241 containerd[2082]: time="2024-07-02T00:22:09.578687565Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 2 00:22:09.579241 containerd[2082]: time="2024-07-02T00:22:09.578706720Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 2 00:22:09.579241 containerd[2082]: time="2024-07-02T00:22:09.578728238Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 2 00:22:09.579241 containerd[2082]: time="2024-07-02T00:22:09.578749644Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 00:22:09.579241 containerd[2082]: time="2024-07-02T00:22:09.578782558Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 00:22:09.579241 containerd[2082]: time="2024-07-02T00:22:09.578802540Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 00:22:09.579810 containerd[2082]: time="2024-07-02T00:22:09.578821303Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 00:22:09.579810 containerd[2082]: time="2024-07-02T00:22:09.578843535Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 00:22:09.579810 containerd[2082]: time="2024-07-02T00:22:09.578863030Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 00:22:09.579810 containerd[2082]: time="2024-07-02T00:22:09.578980913Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 00:22:09.579810 containerd[2082]: time="2024-07-02T00:22:09.578999805Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 00:22:09.579810 containerd[2082]: time="2024-07-02T00:22:09.579134229Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Jul 2 00:22:09.584240 containerd[2082]: time="2024-07-02T00:22:09.582328875Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 00:22:09.584240 containerd[2082]: time="2024-07-02T00:22:09.582386251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 00:22:09.584240 containerd[2082]: time="2024-07-02T00:22:09.582409527Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 2 00:22:09.584240 containerd[2082]: time="2024-07-02T00:22:09.582443742Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 00:22:09.584240 containerd[2082]: time="2024-07-02T00:22:09.582521703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 00:22:09.584240 containerd[2082]: time="2024-07-02T00:22:09.582540832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 00:22:09.584240 containerd[2082]: time="2024-07-02T00:22:09.582561120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 00:22:09.584240 containerd[2082]: time="2024-07-02T00:22:09.582579640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 00:22:09.584240 containerd[2082]: time="2024-07-02T00:22:09.582598039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 00:22:09.584240 containerd[2082]: time="2024-07-02T00:22:09.582616943Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 00:22:09.584240 containerd[2082]: time="2024-07-02T00:22:09.582634911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 00:22:09.584240 containerd[2082]: time="2024-07-02T00:22:09.582654094Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 00:22:09.584240 containerd[2082]: time="2024-07-02T00:22:09.582673311Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 00:22:09.584240 containerd[2082]: time="2024-07-02T00:22:09.582841059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 2 00:22:09.584824 containerd[2082]: time="2024-07-02T00:22:09.582863942Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 2 00:22:09.584824 containerd[2082]: time="2024-07-02T00:22:09.582958118Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 00:22:09.584824 containerd[2082]: time="2024-07-02T00:22:09.582980661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 2 00:22:09.584824 containerd[2082]: time="2024-07-02T00:22:09.583000792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 00:22:09.584824 containerd[2082]: time="2024-07-02T00:22:09.583021164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 2 00:22:09.584824 containerd[2082]: time="2024-07-02T00:22:09.583039963Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jul 2 00:22:09.584824 containerd[2082]: time="2024-07-02T00:22:09.583057045Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 2 00:22:09.585088 containerd[2082]: time="2024-07-02T00:22:09.583455705Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 00:22:09.585088 containerd[2082]: time="2024-07-02T00:22:09.583541927Z" level=info msg="Connect containerd service" Jul 2 00:22:09.585088 containerd[2082]: time="2024-07-02T00:22:09.583585842Z" level=info msg="using legacy CRI server" Jul 2 00:22:09.585088 containerd[2082]: time="2024-07-02T00:22:09.583595099Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 2 00:22:09.585088 containerd[2082]: time="2024-07-02T00:22:09.583713877Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 00:22:09.592987 containerd[2082]: time="2024-07-02T00:22:09.587858794Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up 
network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 00:22:09.592987 containerd[2082]: time="2024-07-02T00:22:09.588667537Z" level=info msg="Start subscribing containerd event" Jul 2 00:22:09.592987 containerd[2082]: time="2024-07-02T00:22:09.592409262Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 00:22:09.592987 containerd[2082]: time="2024-07-02T00:22:09.592573703Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 2 00:22:09.592987 containerd[2082]: time="2024-07-02T00:22:09.592603850Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 00:22:09.592987 containerd[2082]: time="2024-07-02T00:22:09.592729675Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 2 00:22:09.594021 containerd[2082]: time="2024-07-02T00:22:09.592643124Z" level=info msg="Start recovering state" Jul 2 00:22:09.594248 containerd[2082]: time="2024-07-02T00:22:09.594231630Z" level=info msg="Start event monitor" Jul 2 00:22:09.594593 containerd[2082]: time="2024-07-02T00:22:09.594569933Z" level=info msg="Start snapshots syncer" Jul 2 00:22:09.594726 containerd[2082]: time="2024-07-02T00:22:09.594708524Z" level=info msg="Start cni network conf syncer for default" Jul 2 00:22:09.594814 containerd[2082]: time="2024-07-02T00:22:09.594795279Z" level=info msg="Start streaming server" Jul 2 00:22:09.602553 containerd[2082]: time="2024-07-02T00:22:09.602392270Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 00:22:09.602553 containerd[2082]: time="2024-07-02T00:22:09.602492893Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 00:22:09.605906 systemd[1]: Started containerd.service - containerd container runtime. Jul 2 00:22:09.619179 containerd[2082]: time="2024-07-02T00:22:09.615973351Z" level=info msg="containerd successfully booted in 0.174927s" Jul 2 00:22:09.625407 amazon-ssm-agent[2172]: 2024-07-02 00:22:09 INFO Agent will take identity from EC2 Jul 2 00:22:09.723394 amazon-ssm-agent[2172]: 2024-07-02 00:22:09 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 2 00:22:09.824338 amazon-ssm-agent[2172]: 2024-07-02 00:22:09 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 2 00:22:09.828378 amazon-ssm-agent[2172]: 2024-07-02 00:22:09 INFO [amazon-ssm-agent] using named pipe channel for IPC Jul 2 00:22:09.828378 amazon-ssm-agent[2172]: 2024-07-02 00:22:09 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jul 2 00:22:09.828378 amazon-ssm-agent[2172]: 2024-07-02 00:22:09 INFO [amazon-ssm-agent] OS: linux, Arch: amd64 Jul 2 00:22:09.828570 amazon-ssm-agent[2172]: 2024-07-02 00:22:09 INFO [amazon-ssm-agent] Starting Core Agent Jul 2 00:22:09.828570 amazon-ssm-agent[2172]: 2024-07-02 00:22:09 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Jul 2 00:22:09.828570 amazon-ssm-agent[2172]: 2024-07-02 00:22:09 INFO [Registrar] Starting registrar module Jul 2 00:22:09.828570 amazon-ssm-agent[2172]: 2024-07-02 00:22:09 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jul 2 00:22:09.828570 amazon-ssm-agent[2172]: 2024-07-02 00:22:09 INFO [EC2Identity] EC2 registration was successful. Jul 2 00:22:09.828570 amazon-ssm-agent[2172]: 2024-07-02 00:22:09 INFO [CredentialRefresher] credentialRefresher has started Jul 2 00:22:09.828570 amazon-ssm-agent[2172]: 2024-07-02 00:22:09 INFO [CredentialRefresher] Starting credentials refresher loop Jul 2 00:22:09.828570 amazon-ssm-agent[2172]: 2024-07-02 00:22:09 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jul 2 00:22:09.921013 amazon-ssm-agent[2172]: 2024-07-02 00:22:09 INFO [CredentialRefresher] Next credential rotation will be in 31.949992065933333 minutes Jul 2 00:22:09.939229 sshd_keygen[2070]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 00:22:09.980033 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 2 00:22:09.990625 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 2 00:22:10.009404 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 00:22:10.010160 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 2 00:22:10.023764 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 2 00:22:10.044883 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 2 00:22:10.056590 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 2 00:22:10.066812 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 2 00:22:10.068417 systemd[1]: Reached target getty.target - Login Prompts. Jul 2 00:22:10.413140 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:22:10.425653 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 2 00:22:10.429356 systemd[1]: Startup finished in 10.547s (kernel) + 9.045s (userspace) = 19.593s. Jul 2 00:22:10.592662 (kubelet)[2316]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 2 00:22:10.883883 amazon-ssm-agent[2172]: 2024-07-02 00:22:10 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jul 2 00:22:10.985503 amazon-ssm-agent[2172]: 2024-07-02 00:22:10 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2326) started Jul 2 00:22:11.086441 amazon-ssm-agent[2172]: 2024-07-02 00:22:10 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jul 2 00:22:11.545453 kubelet[2316]: E0702 00:22:11.545367 2316 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:22:11.549258 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:22:11.549662 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:22:15.797148 systemd-resolved[1980]: Clock change detected. Flushing caches. 
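The kubelet failure above ("failed to load kubelet config file, path: /var/lib/kubelet/config.yaml ... no such file or directory") is expected on a node that has not yet been joined to a cluster: on kubeadm-managed hosts that file is written by kubeadm init or kubeadm join. The Go sketch below only illustrates the check-and-create shape of a fix; the KubeletConfiguration fields in it are assumptions for the example (the cgroupfs driver matches the container-manager settings the kubelet prints further down), and a real node should use the file kubeadm generates rather than this stand-in.

// kubelet_config_probe.go - illustrative sketch only. The real config.yaml is
// normally written by `kubeadm init` / `kubeadm join`; the fields below are
// assumptions chosen for the example, not taken from this host.
package main

import (
	"fmt"
	"os"
)

const configPath = "/var/lib/kubelet/config.yaml"

// minimalConfig is an assumed, illustrative KubeletConfiguration.
const minimalConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: cgroupfs
`

func main() {
	if _, err := os.Stat(configPath); err == nil {
		fmt.Println(configPath, "already exists; nothing to do")
		return
	}
	if err := os.MkdirAll("/var/lib/kubelet", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile(configPath, []byte(minimalConfig), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("wrote illustrative", configPath)
}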
Jul 2 00:22:16.232689 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 2 00:22:16.238919 systemd[1]: Started sshd@0-172.31.26.26:22-147.75.109.163:52884.service - OpenSSH per-connection server daemon (147.75.109.163:52884). Jul 2 00:22:16.411583 sshd[2340]: Accepted publickey for core from 147.75.109.163 port 52884 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk Jul 2 00:22:16.413598 sshd[2340]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:22:16.428107 systemd-logind[2057]: New session 1 of user core. Jul 2 00:22:16.430284 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 2 00:22:16.440943 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 2 00:22:16.462189 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 2 00:22:16.481203 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 2 00:22:16.496658 (systemd)[2346]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:22:16.724311 systemd[2346]: Queued start job for default target default.target. Jul 2 00:22:16.725158 systemd[2346]: Created slice app.slice - User Application Slice. Jul 2 00:22:16.725188 systemd[2346]: Reached target paths.target - Paths. Jul 2 00:22:16.725245 systemd[2346]: Reached target timers.target - Timers. Jul 2 00:22:16.735616 systemd[2346]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 2 00:22:16.746366 systemd[2346]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 2 00:22:16.746459 systemd[2346]: Reached target sockets.target - Sockets. Jul 2 00:22:16.746530 systemd[2346]: Reached target basic.target - Basic System. Jul 2 00:22:16.748005 systemd[2346]: Reached target default.target - Main User Target. Jul 2 00:22:16.748305 systemd[2346]: Startup finished in 241ms. Jul 2 00:22:16.748683 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 2 00:22:16.766274 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 2 00:22:16.929939 systemd[1]: Started sshd@1-172.31.26.26:22-147.75.109.163:52894.service - OpenSSH per-connection server daemon (147.75.109.163:52894). Jul 2 00:22:17.112813 sshd[2358]: Accepted publickey for core from 147.75.109.163 port 52894 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk Jul 2 00:22:17.114954 sshd[2358]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:22:17.122308 systemd-logind[2057]: New session 2 of user core. Jul 2 00:22:17.129995 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 2 00:22:17.258087 sshd[2358]: pam_unix(sshd:session): session closed for user core Jul 2 00:22:17.262938 systemd[1]: sshd@1-172.31.26.26:22-147.75.109.163:52894.service: Deactivated successfully. Jul 2 00:22:17.269031 systemd-logind[2057]: Session 2 logged out. Waiting for processes to exit. Jul 2 00:22:17.271917 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 00:22:17.278221 systemd-logind[2057]: Removed session 2. Jul 2 00:22:17.292789 systemd[1]: Started sshd@2-172.31.26.26:22-147.75.109.163:52908.service - OpenSSH per-connection server daemon (147.75.109.163:52908). 
Jul 2 00:22:17.480451 sshd[2366]: Accepted publickey for core from 147.75.109.163 port 52908 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk Jul 2 00:22:17.483056 sshd[2366]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:22:17.492578 systemd-logind[2057]: New session 3 of user core. Jul 2 00:22:17.504296 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 2 00:22:17.631123 sshd[2366]: pam_unix(sshd:session): session closed for user core Jul 2 00:22:17.638993 systemd[1]: sshd@2-172.31.26.26:22-147.75.109.163:52908.service: Deactivated successfully. Jul 2 00:22:17.648839 systemd-logind[2057]: Session 3 logged out. Waiting for processes to exit. Jul 2 00:22:17.649712 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 00:22:17.655420 systemd-logind[2057]: Removed session 3. Jul 2 00:22:17.661991 systemd[1]: Started sshd@3-172.31.26.26:22-147.75.109.163:52910.service - OpenSSH per-connection server daemon (147.75.109.163:52910). Jul 2 00:22:17.839309 sshd[2374]: Accepted publickey for core from 147.75.109.163 port 52910 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk Jul 2 00:22:17.841996 sshd[2374]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:22:17.855328 systemd-logind[2057]: New session 4 of user core. Jul 2 00:22:17.861983 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 2 00:22:17.996413 sshd[2374]: pam_unix(sshd:session): session closed for user core Jul 2 00:22:18.002675 systemd[1]: sshd@3-172.31.26.26:22-147.75.109.163:52910.service: Deactivated successfully. Jul 2 00:22:18.009252 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 00:22:18.010606 systemd-logind[2057]: Session 4 logged out. Waiting for processes to exit. Jul 2 00:22:18.011890 systemd-logind[2057]: Removed session 4. Jul 2 00:22:18.030012 systemd[1]: Started sshd@4-172.31.26.26:22-147.75.109.163:52914.service - OpenSSH per-connection server daemon (147.75.109.163:52914). Jul 2 00:22:18.219700 sshd[2382]: Accepted publickey for core from 147.75.109.163 port 52914 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk Jul 2 00:22:18.223401 sshd[2382]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:22:18.232587 systemd-logind[2057]: New session 5 of user core. Jul 2 00:22:18.240145 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 2 00:22:18.384900 sudo[2386]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 2 00:22:18.385994 sudo[2386]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:22:18.411354 sudo[2386]: pam_unix(sudo:session): session closed for user root Jul 2 00:22:18.435466 sshd[2382]: pam_unix(sshd:session): session closed for user core Jul 2 00:22:18.439983 systemd[1]: sshd@4-172.31.26.26:22-147.75.109.163:52914.service: Deactivated successfully. Jul 2 00:22:18.450102 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 00:22:18.450376 systemd-logind[2057]: Session 5 logged out. Waiting for processes to exit. Jul 2 00:22:18.452578 systemd-logind[2057]: Removed session 5. Jul 2 00:22:18.467078 systemd[1]: Started sshd@5-172.31.26.26:22-147.75.109.163:52920.service - OpenSSH per-connection server daemon (147.75.109.163:52920). 
Jul 2 00:22:18.624285 sshd[2391]: Accepted publickey for core from 147.75.109.163 port 52920 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk Jul 2 00:22:18.625413 sshd[2391]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:22:18.633120 systemd-logind[2057]: New session 6 of user core. Jul 2 00:22:18.644252 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 2 00:22:18.749594 sudo[2396]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 2 00:22:18.750239 sudo[2396]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:22:18.755856 sudo[2396]: pam_unix(sudo:session): session closed for user root Jul 2 00:22:18.763190 sudo[2395]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 2 00:22:18.763668 sudo[2395]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:22:18.789285 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 2 00:22:18.806501 auditctl[2399]: No rules Jul 2 00:22:18.807192 systemd[1]: audit-rules.service: Deactivated successfully. Jul 2 00:22:18.807657 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 2 00:22:18.820061 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 00:22:18.862564 augenrules[2418]: No rules Jul 2 00:22:18.865323 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 00:22:18.870586 sudo[2395]: pam_unix(sudo:session): session closed for user root Jul 2 00:22:18.895325 sshd[2391]: pam_unix(sshd:session): session closed for user core Jul 2 00:22:18.900250 systemd[1]: sshd@5-172.31.26.26:22-147.75.109.163:52920.service: Deactivated successfully. Jul 2 00:22:18.906516 systemd-logind[2057]: Session 6 logged out. Waiting for processes to exit. Jul 2 00:22:18.907812 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 00:22:18.909020 systemd-logind[2057]: Removed session 6. Jul 2 00:22:18.927376 systemd[1]: Started sshd@6-172.31.26.26:22-147.75.109.163:52924.service - OpenSSH per-connection server daemon (147.75.109.163:52924). Jul 2 00:22:19.099857 sshd[2427]: Accepted publickey for core from 147.75.109.163 port 52924 ssh2: RSA SHA256:hOHwc07yIE+s3jG8mNGGZeNqnQT2J5yS2IqkiZZysIk Jul 2 00:22:19.102574 sshd[2427]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:22:19.115449 systemd-logind[2057]: New session 7 of user core. Jul 2 00:22:19.122162 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 2 00:22:19.225463 sudo[2431]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 00:22:19.225865 sudo[2431]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:22:20.592080 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:22:20.605638 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:22:20.632312 systemd[1]: Reloading requested from client PID 2470 ('systemctl') (unit session-7.scope)... Jul 2 00:22:20.632339 systemd[1]: Reloading... Jul 2 00:22:20.746259 zram_generator::config[2508]: No configuration found. Jul 2 00:22:20.891368 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 2 00:22:20.987262 systemd[1]: Reloading finished in 354 ms. Jul 2 00:22:21.037744 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 2 00:22:21.037967 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 2 00:22:21.039621 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:22:21.048956 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 00:22:21.630690 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 00:22:21.636774 (kubelet)[2576]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 2 00:22:21.685472 kubelet[2576]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:22:21.685472 kubelet[2576]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 00:22:21.685472 kubelet[2576]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:22:21.686009 kubelet[2576]: I0702 00:22:21.685578 2576 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:22:21.911729 kubelet[2576]: I0702 00:22:21.911273 2576 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jul 2 00:22:21.911729 kubelet[2576]: I0702 00:22:21.911304 2576 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:22:21.911729 kubelet[2576]: I0702 00:22:21.911628 2576 server.go:895] "Client rotation is on, will bootstrap in background" Jul 2 00:22:21.926895 kubelet[2576]: I0702 00:22:21.926815 2576 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:22:21.941833 kubelet[2576]: I0702 00:22:21.941804 2576 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 00:22:21.945836 kubelet[2576]: I0702 00:22:21.945352 2576 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:22:21.945836 kubelet[2576]: I0702 00:22:21.945764 2576 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 00:22:21.945836 kubelet[2576]: I0702 00:22:21.945793 2576 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 00:22:21.945836 kubelet[2576]: I0702 00:22:21.945807 2576 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:22:21.946592 kubelet[2576]: I0702 00:22:21.946569 2576 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:22:21.948068 kubelet[2576]: I0702 00:22:21.948040 2576 kubelet.go:393] "Attempting to sync node with API server" Jul 2 00:22:21.948068 kubelet[2576]: I0702 00:22:21.948071 2576 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 00:22:21.948190 kubelet[2576]: I0702 00:22:21.948099 2576 kubelet.go:309] "Adding apiserver pod source" Jul 2 00:22:21.948190 kubelet[2576]: I0702 00:22:21.948114 2576 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:22:21.950221 kubelet[2576]: I0702 00:22:21.950194 2576 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Jul 2 00:22:21.954588 kubelet[2576]: E0702 00:22:21.953500 2576 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:22:21.954588 kubelet[2576]: E0702 00:22:21.953585 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:22:21.954588 kubelet[2576]: W0702 00:22:21.953734 2576 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 2 00:22:21.954588 kubelet[2576]: I0702 00:22:21.954417 2576 server.go:1232] "Started kubelet" Jul 2 00:22:21.956509 kubelet[2576]: I0702 00:22:21.955940 2576 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:22:21.956609 kubelet[2576]: I0702 00:22:21.956574 2576 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:22:21.957372 kubelet[2576]: I0702 00:22:21.957348 2576 server.go:462] "Adding debug handlers to kubelet server" Jul 2 00:22:21.958411 kubelet[2576]: I0702 00:22:21.958381 2576 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jul 2 00:22:21.959144 kubelet[2576]: I0702 00:22:21.958822 2576 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:22:21.964185 kubelet[2576]: I0702 00:22:21.964119 2576 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 00:22:21.964634 kubelet[2576]: I0702 00:22:21.964452 2576 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 00:22:21.964852 kubelet[2576]: I0702 00:22:21.964772 2576 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 00:22:21.967045 kubelet[2576]: E0702 00:22:21.966949 2576 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jul 2 00:22:21.967045 kubelet[2576]: E0702 00:22:21.966983 2576 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 00:22:21.992244 kubelet[2576]: E0702 00:22:21.991837 2576 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.26.26\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Jul 2 00:22:21.992244 kubelet[2576]: W0702 00:22:21.991895 2576 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.26.26" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jul 2 00:22:21.992244 kubelet[2576]: E0702 00:22:21.991920 2576 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.26.26" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jul 2 00:22:21.992244 kubelet[2576]: W0702 00:22:21.991959 2576 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jul 2 00:22:21.992244 kubelet[2576]: E0702 00:22:21.991972 2576 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jul 2 00:22:21.992576 kubelet[2576]: E0702 00:22:21.992023 2576 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.26.17de3d835394f20e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, 
time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.26.26", UID:"172.31.26.26", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"172.31.26.26"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 22, 21, 954388494, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 22, 21, 954388494, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.26.26"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Jul 2 00:22:21.996293 kubelet[2576]: W0702 00:22:21.996234 2576 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jul 2 00:22:21.996293 kubelet[2576]: E0702 00:22:21.996269 2576 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Jul 2 00:22:21.997288 kubelet[2576]: E0702 00:22:21.996788 2576 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.26.17de3d835454dec2", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.26.26", UID:"172.31.26.26", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"172.31.26.26"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 22, 21, 966966466, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 22, 21, 966966466, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.26.26"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Jul 2 00:22:22.042878 kubelet[2576]: E0702 00:22:22.042800 2576 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.26.17de3d8358ac059f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.26.26", UID:"172.31.26.26", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.26.26 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.26.26"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 22, 22, 39786911, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 22, 22, 39786911, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.26.26"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Jul 2 00:22:22.043227 kubelet[2576]: I0702 00:22:22.043071 2576 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:22:22.044120 kubelet[2576]: I0702 00:22:22.043286 2576 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:22:22.044120 kubelet[2576]: I0702 00:22:22.043306 2576 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:22:22.046162 kubelet[2576]: E0702 00:22:22.046088 2576 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.26.17de3d8358ac201c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.26.26", UID:"172.31.26.26", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.26.26 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.26.26"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 22, 22, 39793692, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 22, 22, 39793692, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.26.26"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Jul 2 00:22:22.046777 kubelet[2576]: I0702 00:22:22.046749 2576 policy_none.go:49] "None policy: Start" Jul 2 00:22:22.049137 kubelet[2576]: I0702 00:22:22.049124 2576 memory_manager.go:169] "Starting memorymanager" policy="None" Jul 2 00:22:22.049213 kubelet[2576]: I0702 00:22:22.049207 2576 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:22:22.049688 kubelet[2576]: E0702 00:22:22.048467 2576 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.26.17de3d8358ac2bea", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.26.26", UID:"172.31.26.26", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.26.26 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.26.26"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 22, 22, 39796714, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 22, 22, 39796714, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.26.26"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Jul 2 00:22:22.065916 kubelet[2576]: I0702 00:22:22.065117 2576 kubelet_node_status.go:70] "Attempting to register node" node="172.31.26.26" Jul 2 00:22:22.071194 kubelet[2576]: I0702 00:22:22.070245 2576 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:22:22.071194 kubelet[2576]: I0702 00:22:22.070613 2576 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:22:22.074499 kubelet[2576]: E0702 00:22:22.073074 2576 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.26.17de3d8358ac059f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.26.26", UID:"172.31.26.26", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.26.26 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.26.26"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 22, 22, 39786911, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 22, 22, 65072386, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.26.26"}': 'events "172.31.26.26.17de3d8358ac059f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Jul 2 00:22:22.076830 kubelet[2576]: E0702 00:22:22.073829 2576 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.26.26\" not found" Jul 2 00:22:22.076830 kubelet[2576]: E0702 00:22:22.074243 2576 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.26.26" Jul 2 00:22:22.079933 kubelet[2576]: E0702 00:22:22.079839 2576 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.26.17de3d8358ac201c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.26.26", UID:"172.31.26.26", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.26.26 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.26.26"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 22, 22, 39793692, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 22, 22, 65082534, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.26.26"}': 'events "172.31.26.26.17de3d8358ac201c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Jul 2 00:22:22.082199 kubelet[2576]: E0702 00:22:22.082016 2576 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.26.17de3d8358ac2bea", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.26.26", UID:"172.31.26.26", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.26.26 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.26.26"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 22, 22, 39796714, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 22, 22, 65085146, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.26.26"}': 'events "172.31.26.26.17de3d8358ac2bea" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Jul 2 00:22:22.087514 kubelet[2576]: E0702 00:22:22.084951 2576 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.26.17de3d835aa30eb8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.26.26", UID:"172.31.26.26", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"172.31.26.26"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 22, 22, 72753848, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 22, 22, 72753848, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.26.26"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Jul 2 00:22:22.094230 kubelet[2576]: I0702 00:22:22.094200 2576 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:22:22.096409 kubelet[2576]: I0702 00:22:22.095750 2576 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 00:22:22.096409 kubelet[2576]: I0702 00:22:22.095779 2576 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:22:22.096409 kubelet[2576]: I0702 00:22:22.095806 2576 kubelet.go:2303] "Starting kubelet main sync loop" Jul 2 00:22:22.096409 kubelet[2576]: E0702 00:22:22.095926 2576 kubelet.go:2327] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jul 2 00:22:22.099116 kubelet[2576]: W0702 00:22:22.099092 2576 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Jul 2 00:22:22.099212 kubelet[2576]: E0702 00:22:22.099127 2576 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Jul 2 00:22:22.193962 kubelet[2576]: E0702 00:22:22.193837 2576 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.26.26\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Jul 2 00:22:22.278249 kubelet[2576]: I0702 00:22:22.278074 2576 kubelet_node_status.go:70] "Attempting to register node" node="172.31.26.26" Jul 2 00:22:22.280324 kubelet[2576]: E0702 00:22:22.280290 2576 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster 
scope" node="172.31.26.26" Jul 2 00:22:22.281437 kubelet[2576]: E0702 00:22:22.281357 2576 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.26.17de3d8358ac059f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.26.26", UID:"172.31.26.26", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.26.26 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.26.26"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 22, 22, 39786911, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 22, 22, 278014322, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.26.26"}': 'events "172.31.26.26.17de3d8358ac059f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Jul 2 00:22:22.282762 kubelet[2576]: E0702 00:22:22.282687 2576 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.26.17de3d8358ac201c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.26.26", UID:"172.31.26.26", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.26.26 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.26.26"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 22, 22, 39793692, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 22, 22, 278024170, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.26.26"}': 'events "172.31.26.26.17de3d8358ac201c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Jul 2 00:22:22.285192 kubelet[2576]: E0702 00:22:22.285111 2576 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.26.17de3d8358ac2bea", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.26.26", UID:"172.31.26.26", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.26.26 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.26.26"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 22, 22, 39796714, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 22, 22, 278029356, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.26.26"}': 'events "172.31.26.26.17de3d8358ac2bea" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Jul 2 00:22:22.595646 kubelet[2576]: E0702 00:22:22.595611 2576 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.26.26\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Jul 2 00:22:22.682287 kubelet[2576]: I0702 00:22:22.682047 2576 kubelet_node_status.go:70] "Attempting to register node" node="172.31.26.26" Jul 2 00:22:22.685164 kubelet[2576]: E0702 00:22:22.684833 2576 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.26.17de3d8358ac059f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.26.26", UID:"172.31.26.26", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.26.26 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.26.26"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 22, 22, 39786911, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 22, 22, 681987658, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.26.26"}': 'events "172.31.26.26.17de3d8358ac059f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Jul 2 00:22:22.685164 kubelet[2576]: E0702 00:22:22.685035 2576 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.26.26" Jul 2 00:22:22.685906 kubelet[2576]: E0702 00:22:22.685827 2576 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.26.17de3d8358ac201c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.26.26", UID:"172.31.26.26", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.26.26 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.26.26"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 22, 22, 39793692, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 22, 22, 681996014, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.26.26"}': 'events "172.31.26.26.17de3d8358ac201c" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Jul 2 00:22:22.686788 kubelet[2576]: E0702 00:22:22.686716 2576 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.26.17de3d8358ac2bea", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.26.26", UID:"172.31.26.26", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.26.26 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.26.26"}, FirstTimestamp:time.Date(2024, time.July, 2, 0, 22, 22, 39796714, time.Local), LastTimestamp:time.Date(2024, time.July, 2, 0, 22, 22, 682005478, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.26.26"}': 'events "172.31.26.26.17de3d8358ac2bea" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
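The repeated rejections above ('User "system:anonymous" cannot create/patch resource "events"', 'cannot create resource "nodes"', 'cannot get resource "leases"') all appear to stem from the same condition: the kubelet is still talking to the API server with anonymous credentials because its bootstrap/rotated client certificate is not yet in use (see the "Certificate rotation detected" entry that follows). A minimal client-go sketch for checking what the currently configured credentials are actually allowed to do is below; it is an illustrative diagnostic only, and the kubeconfig path is an assumption that would need to match this node's real kubelet kubeconfig.

    package main

    import (
        "context"
        "fmt"

        authv1 "k8s.io/api/authorization/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig path; point this at the kubelet's actual kubeconfig.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Ask the API server whether the presented identity may create events in "default",
        // which is exactly the request being rejected in the log above.
        sar := &authv1.SelfSubjectAccessReview{
            Spec: authv1.SelfSubjectAccessReviewSpec{
                ResourceAttributes: &authv1.ResourceAttributes{
                    Namespace: "default",
                    Verb:      "create",
                    Resource:  "events",
                },
            },
        }
        resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(context.TODO(), sar, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
    }

With anonymous credentials this should report allowed=false, matching the "forbidden" errors above; once the rotated certificate is in place it should report allowed=true.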
Jul 2 00:22:22.914965 kubelet[2576]: I0702 00:22:22.914830 2576 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jul 2 00:22:22.954504 kubelet[2576]: E0702 00:22:22.954446 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:22:23.366911 kubelet[2576]: E0702 00:22:23.366867 2576 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.31.26.26" not found Jul 2 00:22:23.401804 kubelet[2576]: E0702 00:22:23.401729 2576 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.26.26\" not found" node="172.31.26.26" Jul 2 00:22:23.486326 kubelet[2576]: I0702 00:22:23.486299 2576 kubelet_node_status.go:70] "Attempting to register node" node="172.31.26.26" Jul 2 00:22:23.493832 kubelet[2576]: I0702 00:22:23.493798 2576 kubelet_node_status.go:73] "Successfully registered node" node="172.31.26.26" Jul 2 00:22:23.518474 kubelet[2576]: E0702 00:22:23.518424 2576 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.26.26\" not found" Jul 2 00:22:23.619531 kubelet[2576]: I0702 00:22:23.619404 2576 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jul 2 00:22:23.620609 containerd[2082]: time="2024-07-02T00:22:23.620424353Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 2 00:22:23.621242 kubelet[2576]: I0702 00:22:23.621216 2576 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jul 2 00:22:23.652758 sudo[2431]: pam_unix(sudo:session): session closed for user root Jul 2 00:22:23.677038 sshd[2427]: pam_unix(sshd:session): session closed for user core Jul 2 00:22:23.680951 systemd[1]: sshd@6-172.31.26.26:22-147.75.109.163:52924.service: Deactivated successfully. Jul 2 00:22:23.695693 systemd-logind[2057]: Session 7 logged out. Waiting for processes to exit. Jul 2 00:22:23.695979 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 00:22:23.708231 systemd-logind[2057]: Removed session 7. 
Jul 2 00:22:23.951476 kubelet[2576]: I0702 00:22:23.951155 2576 apiserver.go:52] "Watching apiserver" Jul 2 00:22:23.955420 kubelet[2576]: E0702 00:22:23.955356 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:22:23.971813 kubelet[2576]: I0702 00:22:23.971776 2576 topology_manager.go:215] "Topology Admit Handler" podUID="7b52e0cb-546b-4204-b5bc-69216c2be074" podNamespace="calico-system" podName="calico-node-w9lnf" Jul 2 00:22:23.971943 kubelet[2576]: I0702 00:22:23.971936 2576 topology_manager.go:215] "Topology Admit Handler" podUID="5b4fb344-60af-4260-b6c2-41ad84a8e2e0" podNamespace="calico-system" podName="csi-node-driver-x5dxg" Jul 2 00:22:23.972021 kubelet[2576]: I0702 00:22:23.972001 2576 topology_manager.go:215] "Topology Admit Handler" podUID="5b463649-d58a-4be5-8411-7959f273f65a" podNamespace="kube-system" podName="kube-proxy-xkvkv" Jul 2 00:22:23.979906 kubelet[2576]: E0702 00:22:23.979869 2576 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x5dxg" podUID="5b4fb344-60af-4260-b6c2-41ad84a8e2e0" Jul 2 00:22:24.065839 kubelet[2576]: I0702 00:22:24.065781 2576 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 00:22:24.073888 kubelet[2576]: I0702 00:22:24.073686 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5b463649-d58a-4be5-8411-7959f273f65a-xtables-lock\") pod \"kube-proxy-xkvkv\" (UID: \"5b463649-d58a-4be5-8411-7959f273f65a\") " pod="kube-system/kube-proxy-xkvkv" Jul 2 00:22:24.073888 kubelet[2576]: I0702 00:22:24.073750 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7b52e0cb-546b-4204-b5bc-69216c2be074-xtables-lock\") pod \"calico-node-w9lnf\" (UID: \"7b52e0cb-546b-4204-b5bc-69216c2be074\") " pod="calico-system/calico-node-w9lnf" Jul 2 00:22:24.073888 kubelet[2576]: I0702 00:22:24.073807 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7b52e0cb-546b-4204-b5bc-69216c2be074-tigera-ca-bundle\") pod \"calico-node-w9lnf\" (UID: \"7b52e0cb-546b-4204-b5bc-69216c2be074\") " pod="calico-system/calico-node-w9lnf" Jul 2 00:22:24.073888 kubelet[2576]: I0702 00:22:24.073867 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5b4fb344-60af-4260-b6c2-41ad84a8e2e0-varrun\") pod \"csi-node-driver-x5dxg\" (UID: \"5b4fb344-60af-4260-b6c2-41ad84a8e2e0\") " pod="calico-system/csi-node-driver-x5dxg" Jul 2 00:22:24.074171 kubelet[2576]: I0702 00:22:24.073910 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5b4fb344-60af-4260-b6c2-41ad84a8e2e0-kubelet-dir\") pod \"csi-node-driver-x5dxg\" (UID: \"5b4fb344-60af-4260-b6c2-41ad84a8e2e0\") " pod="calico-system/csi-node-driver-x5dxg" Jul 2 00:22:24.074171 kubelet[2576]: I0702 00:22:24.073947 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5b4fb344-60af-4260-b6c2-41ad84a8e2e0-registration-dir\") pod \"csi-node-driver-x5dxg\" (UID: \"5b4fb344-60af-4260-b6c2-41ad84a8e2e0\") " pod="calico-system/csi-node-driver-x5dxg" Jul 2 00:22:24.074171 kubelet[2576]: I0702 00:22:24.073974 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltn9z\" (UniqueName: \"kubernetes.io/projected/5b4fb344-60af-4260-b6c2-41ad84a8e2e0-kube-api-access-ltn9z\") pod \"csi-node-driver-x5dxg\" (UID: \"5b4fb344-60af-4260-b6c2-41ad84a8e2e0\") " pod="calico-system/csi-node-driver-x5dxg" Jul 2 00:22:24.074171 kubelet[2576]: I0702 00:22:24.074015 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7b52e0cb-546b-4204-b5bc-69216c2be074-lib-modules\") pod \"calico-node-w9lnf\" (UID: \"7b52e0cb-546b-4204-b5bc-69216c2be074\") " pod="calico-system/calico-node-w9lnf" Jul 2 00:22:24.074171 kubelet[2576]: I0702 00:22:24.074043 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7b52e0cb-546b-4204-b5bc-69216c2be074-var-lib-calico\") pod \"calico-node-w9lnf\" (UID: \"7b52e0cb-546b-4204-b5bc-69216c2be074\") " pod="calico-system/calico-node-w9lnf" Jul 2 00:22:24.074370 kubelet[2576]: I0702 00:22:24.074074 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7b52e0cb-546b-4204-b5bc-69216c2be074-cni-net-dir\") pod \"calico-node-w9lnf\" (UID: \"7b52e0cb-546b-4204-b5bc-69216c2be074\") " pod="calico-system/calico-node-w9lnf" Jul 2 00:22:24.074370 kubelet[2576]: I0702 00:22:24.074125 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7b52e0cb-546b-4204-b5bc-69216c2be074-cni-log-dir\") pod \"calico-node-w9lnf\" (UID: \"7b52e0cb-546b-4204-b5bc-69216c2be074\") " pod="calico-system/calico-node-w9lnf" Jul 2 00:22:24.074370 kubelet[2576]: I0702 00:22:24.074156 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7b52e0cb-546b-4204-b5bc-69216c2be074-flexvol-driver-host\") pod \"calico-node-w9lnf\" (UID: \"7b52e0cb-546b-4204-b5bc-69216c2be074\") " pod="calico-system/calico-node-w9lnf" Jul 2 00:22:24.074370 kubelet[2576]: I0702 00:22:24.074193 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz92q\" (UniqueName: \"kubernetes.io/projected/5b463649-d58a-4be5-8411-7959f273f65a-kube-api-access-cz92q\") pod \"kube-proxy-xkvkv\" (UID: \"5b463649-d58a-4be5-8411-7959f273f65a\") " pod="kube-system/kube-proxy-xkvkv" Jul 2 00:22:24.074370 kubelet[2576]: I0702 00:22:24.074228 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7b52e0cb-546b-4204-b5bc-69216c2be074-node-certs\") pod \"calico-node-w9lnf\" (UID: \"7b52e0cb-546b-4204-b5bc-69216c2be074\") " pod="calico-system/calico-node-w9lnf" Jul 2 00:22:24.074576 kubelet[2576]: I0702 00:22:24.074260 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: 
\"kubernetes.io/host-path/7b52e0cb-546b-4204-b5bc-69216c2be074-cni-bin-dir\") pod \"calico-node-w9lnf\" (UID: \"7b52e0cb-546b-4204-b5bc-69216c2be074\") " pod="calico-system/calico-node-w9lnf" Jul 2 00:22:24.074576 kubelet[2576]: I0702 00:22:24.074296 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-974mc\" (UniqueName: \"kubernetes.io/projected/7b52e0cb-546b-4204-b5bc-69216c2be074-kube-api-access-974mc\") pod \"calico-node-w9lnf\" (UID: \"7b52e0cb-546b-4204-b5bc-69216c2be074\") " pod="calico-system/calico-node-w9lnf" Jul 2 00:22:24.074576 kubelet[2576]: I0702 00:22:24.074350 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5b4fb344-60af-4260-b6c2-41ad84a8e2e0-socket-dir\") pod \"csi-node-driver-x5dxg\" (UID: \"5b4fb344-60af-4260-b6c2-41ad84a8e2e0\") " pod="calico-system/csi-node-driver-x5dxg" Jul 2 00:22:24.074576 kubelet[2576]: I0702 00:22:24.074380 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7b52e0cb-546b-4204-b5bc-69216c2be074-policysync\") pod \"calico-node-w9lnf\" (UID: \"7b52e0cb-546b-4204-b5bc-69216c2be074\") " pod="calico-system/calico-node-w9lnf" Jul 2 00:22:24.074576 kubelet[2576]: I0702 00:22:24.074411 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7b52e0cb-546b-4204-b5bc-69216c2be074-var-run-calico\") pod \"calico-node-w9lnf\" (UID: \"7b52e0cb-546b-4204-b5bc-69216c2be074\") " pod="calico-system/calico-node-w9lnf" Jul 2 00:22:24.074811 kubelet[2576]: I0702 00:22:24.074442 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5b463649-d58a-4be5-8411-7959f273f65a-kube-proxy\") pod \"kube-proxy-xkvkv\" (UID: \"5b463649-d58a-4be5-8411-7959f273f65a\") " pod="kube-system/kube-proxy-xkvkv" Jul 2 00:22:24.074811 kubelet[2576]: I0702 00:22:24.074475 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5b463649-d58a-4be5-8411-7959f273f65a-lib-modules\") pod \"kube-proxy-xkvkv\" (UID: \"5b463649-d58a-4be5-8411-7959f273f65a\") " pod="kube-system/kube-proxy-xkvkv" Jul 2 00:22:24.181375 kubelet[2576]: E0702 00:22:24.181264 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:24.181375 kubelet[2576]: W0702 00:22:24.181287 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:24.181375 kubelet[2576]: E0702 00:22:24.181314 2576 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:22:24.222197 kubelet[2576]: E0702 00:22:24.222102 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:24.222197 kubelet[2576]: W0702 00:22:24.222144 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:24.222197 kubelet[2576]: E0702 00:22:24.222171 2576 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:22:24.241887 kubelet[2576]: E0702 00:22:24.241608 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:24.241887 kubelet[2576]: W0702 00:22:24.241636 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:24.241887 kubelet[2576]: E0702 00:22:24.241674 2576 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 2 00:22:24.245339 kubelet[2576]: E0702 00:22:24.245262 2576 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 2 00:22:24.245339 kubelet[2576]: W0702 00:22:24.245280 2576 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 2 00:22:24.245339 kubelet[2576]: E0702 00:22:24.245304 2576 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 2 00:22:24.276652 containerd[2082]: time="2024-07-02T00:22:24.276597894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xkvkv,Uid:5b463649-d58a-4be5-8411-7959f273f65a,Namespace:kube-system,Attempt:0,}" Jul 2 00:22:24.284331 containerd[2082]: time="2024-07-02T00:22:24.284288808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-w9lnf,Uid:7b52e0cb-546b-4204-b5bc-69216c2be074,Namespace:calico-system,Attempt:0,}" Jul 2 00:22:24.917948 containerd[2082]: time="2024-07-02T00:22:24.917890302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:22:24.919444 containerd[2082]: time="2024-07-02T00:22:24.919398144Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:22:24.920968 containerd[2082]: time="2024-07-02T00:22:24.920752863Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 2 00:22:24.922067 containerd[2082]: time="2024-07-02T00:22:24.921893621Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 00:22:24.924521 containerd[2082]: time="2024-07-02T00:22:24.923988181Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:22:24.928514 containerd[2082]: time="2024-07-02T00:22:24.928304354Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 00:22:24.930522 containerd[2082]: time="2024-07-02T00:22:24.929383315Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 644.972975ms" Jul 2 00:22:24.932268 containerd[2082]: time="2024-07-02T00:22:24.932226809Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 655.489195ms" Jul 2 00:22:24.956462 kubelet[2576]: E0702 00:22:24.956334 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:22:25.209664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3041736461.mount: Deactivated successfully. Jul 2 00:22:25.326090 containerd[2082]: time="2024-07-02T00:22:25.325988315Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:22:25.326319 containerd[2082]: time="2024-07-02T00:22:25.326213807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:22:25.326452 containerd[2082]: time="2024-07-02T00:22:25.326370107Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:22:25.326599 containerd[2082]: time="2024-07-02T00:22:25.326521829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:22:25.349194 containerd[2082]: time="2024-07-02T00:22:25.349008903Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:22:25.349991 containerd[2082]: time="2024-07-02T00:22:25.349908963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:22:25.350437 containerd[2082]: time="2024-07-02T00:22:25.349977718Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:22:25.350437 containerd[2082]: time="2024-07-02T00:22:25.350366644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:22:25.607420 containerd[2082]: time="2024-07-02T00:22:25.607254922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-w9lnf,Uid:7b52e0cb-546b-4204-b5bc-69216c2be074,Namespace:calico-system,Attempt:0,} returns sandbox id \"ec51c834d6d68af8cc8a29980b3581d31092aa132c1824287b71f87283041188\"" Jul 2 00:22:25.611018 containerd[2082]: time="2024-07-02T00:22:25.610978647Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jul 2 00:22:25.616030 containerd[2082]: time="2024-07-02T00:22:25.615987989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xkvkv,Uid:5b463649-d58a-4be5-8411-7959f273f65a,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ab2e623dd2b6c6afcbc4fd849a5af8f84f052235bdf8e90ad6a63bafd04691c\"" Jul 2 00:22:25.957504 kubelet[2576]: E0702 00:22:25.957369 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:22:26.097149 kubelet[2576]: E0702 00:22:26.097108 2576 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x5dxg" podUID="5b4fb344-60af-4260-b6c2-41ad84a8e2e0" Jul 2 00:22:26.862208 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1480431819.mount: Deactivated successfully. 
Jul 2 00:22:26.957771 kubelet[2576]: E0702 00:22:26.957705 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:22:27.048649 containerd[2082]: time="2024-07-02T00:22:27.048598677Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:22:27.050959 containerd[2082]: time="2024-07-02T00:22:27.050699474Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=6588466" Jul 2 00:22:27.053204 containerd[2082]: time="2024-07-02T00:22:27.053143935Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:22:27.056943 containerd[2082]: time="2024-07-02T00:22:27.055975234Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:22:27.056943 containerd[2082]: time="2024-07-02T00:22:27.056628751Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.445612108s" Jul 2 00:22:27.056943 containerd[2082]: time="2024-07-02T00:22:27.056734211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Jul 2 00:22:27.058322 containerd[2082]: time="2024-07-02T00:22:27.058293168Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jul 2 00:22:27.059515 containerd[2082]: time="2024-07-02T00:22:27.059386451Z" level=info msg="CreateContainer within sandbox \"ec51c834d6d68af8cc8a29980b3581d31092aa132c1824287b71f87283041188\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 2 00:22:27.092278 containerd[2082]: time="2024-07-02T00:22:27.092240074Z" level=info msg="CreateContainer within sandbox \"ec51c834d6d68af8cc8a29980b3581d31092aa132c1824287b71f87283041188\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"00bca3082ed02180ef4e2e7860a0aeafa38339193c9d6bc538b493c68920a3a9\"" Jul 2 00:22:27.093058 containerd[2082]: time="2024-07-02T00:22:27.092996822Z" level=info msg="StartContainer for \"00bca3082ed02180ef4e2e7860a0aeafa38339193c9d6bc538b493c68920a3a9\"" Jul 2 00:22:27.224052 containerd[2082]: time="2024-07-02T00:22:27.223598604Z" level=info msg="StartContainer for \"00bca3082ed02180ef4e2e7860a0aeafa38339193c9d6bc538b493c68920a3a9\" returns successfully" Jul 2 00:22:27.287741 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00bca3082ed02180ef4e2e7860a0aeafa38339193c9d6bc538b493c68920a3a9-rootfs.mount: Deactivated successfully. 
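The earlier "FlexVolume: driver call failed ... nodeagent~uds/uds ... executable file not found" probes are expected on a fresh node: the flexvol-driver init container started above (pulled from ghcr.io/flatcar/calico/pod2daemon-flexvol) is what installs that uds binary under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/. A small, assumed diagnostic sketch that checks whether the driver has landed yet (the path is copied from the kubelet's probe messages):

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Path taken from the FlexVolume probe errors in the log above.
        const driver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

        info, err := os.Stat(driver)
        if err != nil {
            fmt.Println("driver not installed yet:", err)
            return
        }
        // The kubelet needs a regular, executable file here; 0111 checks any execute bit.
        if info.Mode().IsRegular() && info.Mode().Perm()&0111 != 0 {
            fmt.Println("flexvolume driver present and executable")
        } else {
            fmt.Println("file exists but is not an executable regular file")
        }
    }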
Jul 2 00:22:27.495854 containerd[2082]: time="2024-07-02T00:22:27.494559846Z" level=info msg="shim disconnected" id=00bca3082ed02180ef4e2e7860a0aeafa38339193c9d6bc538b493c68920a3a9 namespace=k8s.io Jul 2 00:22:27.495854 containerd[2082]: time="2024-07-02T00:22:27.495453717Z" level=warning msg="cleaning up after shim disconnected" id=00bca3082ed02180ef4e2e7860a0aeafa38339193c9d6bc538b493c68920a3a9 namespace=k8s.io Jul 2 00:22:27.495854 containerd[2082]: time="2024-07-02T00:22:27.495543622Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:22:27.528597 containerd[2082]: time="2024-07-02T00:22:27.528538578Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:22:27Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 2 00:22:27.958859 kubelet[2576]: E0702 00:22:27.958807 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:22:28.101524 kubelet[2576]: E0702 00:22:28.099468 2576 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x5dxg" podUID="5b4fb344-60af-4260-b6c2-41ad84a8e2e0" Jul 2 00:22:28.675437 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2474465867.mount: Deactivated successfully. Jul 2 00:22:28.959663 kubelet[2576]: E0702 00:22:28.959198 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:22:29.453321 containerd[2082]: time="2024-07-02T00:22:29.453271276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:22:29.454705 containerd[2082]: time="2024-07-02T00:22:29.454508359Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=28118419" Jul 2 00:22:29.456784 containerd[2082]: time="2024-07-02T00:22:29.456725559Z" level=info msg="ImageCreate event name:\"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:22:29.460206 containerd[2082]: time="2024-07-02T00:22:29.460148681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:22:29.461349 containerd[2082]: time="2024-07-02T00:22:29.460956687Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"28117438\" in 2.402624919s" Jul 2 00:22:29.461349 containerd[2082]: time="2024-07-02T00:22:29.461000282Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:a3eea76ce409e136fe98838847fda217ce169eb7d1ceef544671d75f68e5a29c\"" Jul 2 00:22:29.462467 containerd[2082]: time="2024-07-02T00:22:29.462045214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jul 2 00:22:29.463682 containerd[2082]: 
time="2024-07-02T00:22:29.463650512Z" level=info msg="CreateContainer within sandbox \"4ab2e623dd2b6c6afcbc4fd849a5af8f84f052235bdf8e90ad6a63bafd04691c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 00:22:29.494679 containerd[2082]: time="2024-07-02T00:22:29.494631797Z" level=info msg="CreateContainer within sandbox \"4ab2e623dd2b6c6afcbc4fd849a5af8f84f052235bdf8e90ad6a63bafd04691c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8f91f0ab81656dc0a439ab70cae722c6502d029a73183b4d4700a67ace7b5b28\"" Jul 2 00:22:29.495447 containerd[2082]: time="2024-07-02T00:22:29.495412245Z" level=info msg="StartContainer for \"8f91f0ab81656dc0a439ab70cae722c6502d029a73183b4d4700a67ace7b5b28\"" Jul 2 00:22:29.598428 containerd[2082]: time="2024-07-02T00:22:29.598364880Z" level=info msg="StartContainer for \"8f91f0ab81656dc0a439ab70cae722c6502d029a73183b4d4700a67ace7b5b28\" returns successfully" Jul 2 00:22:29.959663 kubelet[2576]: E0702 00:22:29.959394 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:22:30.098854 kubelet[2576]: E0702 00:22:30.097009 2576 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x5dxg" podUID="5b4fb344-60af-4260-b6c2-41ad84a8e2e0" Jul 2 00:22:30.960505 kubelet[2576]: E0702 00:22:30.959941 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:22:31.960763 kubelet[2576]: E0702 00:22:31.960724 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:22:32.098825 kubelet[2576]: E0702 00:22:32.098797 2576 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x5dxg" podUID="5b4fb344-60af-4260-b6c2-41ad84a8e2e0" Jul 2 00:22:32.961820 kubelet[2576]: E0702 00:22:32.961770 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:22:33.963114 kubelet[2576]: E0702 00:22:33.963084 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:22:34.098188 kubelet[2576]: E0702 00:22:34.097822 2576 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x5dxg" podUID="5b4fb344-60af-4260-b6c2-41ad84a8e2e0" Jul 2 00:22:34.562458 containerd[2082]: time="2024-07-02T00:22:34.562410781Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:22:34.563943 containerd[2082]: time="2024-07-02T00:22:34.563807184Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Jul 2 00:22:34.567160 containerd[2082]: time="2024-07-02T00:22:34.565662798Z" level=info msg="ImageCreate event 
name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:22:34.568622 containerd[2082]: time="2024-07-02T00:22:34.568579129Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:22:34.569519 containerd[2082]: time="2024-07-02T00:22:34.569457360Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 5.107373344s" Jul 2 00:22:34.569636 containerd[2082]: time="2024-07-02T00:22:34.569520662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Jul 2 00:22:34.571643 containerd[2082]: time="2024-07-02T00:22:34.571610154Z" level=info msg="CreateContainer within sandbox \"ec51c834d6d68af8cc8a29980b3581d31092aa132c1824287b71f87283041188\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 2 00:22:34.594088 containerd[2082]: time="2024-07-02T00:22:34.594034266Z" level=info msg="CreateContainer within sandbox \"ec51c834d6d68af8cc8a29980b3581d31092aa132c1824287b71f87283041188\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"de26d5045e417c598c6cc42538a0011261c04b717bf0e5ca91e2ba3f55d8ae3c\"" Jul 2 00:22:34.596124 containerd[2082]: time="2024-07-02T00:22:34.594792540Z" level=info msg="StartContainer for \"de26d5045e417c598c6cc42538a0011261c04b717bf0e5ca91e2ba3f55d8ae3c\"" Jul 2 00:22:34.664146 containerd[2082]: time="2024-07-02T00:22:34.664075381Z" level=info msg="StartContainer for \"de26d5045e417c598c6cc42538a0011261c04b717bf0e5ca91e2ba3f55d8ae3c\" returns successfully" Jul 2 00:22:34.964637 kubelet[2576]: E0702 00:22:34.964578 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:22:35.191161 kubelet[2576]: I0702 00:22:35.191119 2576 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-xkvkv" podStartSLOduration=8.346788285 podCreationTimestamp="2024-07-02 00:22:23 +0000 UTC" firstStartedPulling="2024-07-02 00:22:25.6175216 +0000 UTC m=+3.976723368" lastFinishedPulling="2024-07-02 00:22:29.461789145 +0000 UTC m=+7.820990925" observedRunningTime="2024-07-02 00:22:30.211231867 +0000 UTC m=+8.570433655" watchObservedRunningTime="2024-07-02 00:22:35.191055842 +0000 UTC m=+13.550257632" Jul 2 00:22:35.965535 kubelet[2576]: E0702 00:22:35.965468 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:22:36.098213 kubelet[2576]: E0702 00:22:36.097808 2576 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x5dxg" podUID="5b4fb344-60af-4260-b6c2-41ad84a8e2e0" Jul 2 00:22:36.196734 containerd[2082]: time="2024-07-02T00:22:36.196659861Z" level=error msg="failed to reload cni configuration after receiving fs change 
event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 00:22:36.234418 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de26d5045e417c598c6cc42538a0011261c04b717bf0e5ca91e2ba3f55d8ae3c-rootfs.mount: Deactivated successfully. Jul 2 00:22:36.254076 kubelet[2576]: I0702 00:22:36.253830 2576 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jul 2 00:22:36.605781 containerd[2082]: time="2024-07-02T00:22:36.605719864Z" level=info msg="shim disconnected" id=de26d5045e417c598c6cc42538a0011261c04b717bf0e5ca91e2ba3f55d8ae3c namespace=k8s.io Jul 2 00:22:36.605781 containerd[2082]: time="2024-07-02T00:22:36.605774118Z" level=warning msg="cleaning up after shim disconnected" id=de26d5045e417c598c6cc42538a0011261c04b717bf0e5ca91e2ba3f55d8ae3c namespace=k8s.io Jul 2 00:22:36.605781 containerd[2082]: time="2024-07-02T00:22:36.605785437Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 00:22:36.966734 kubelet[2576]: E0702 00:22:36.966577 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:22:37.173627 containerd[2082]: time="2024-07-02T00:22:37.173060154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jul 2 00:22:37.967086 kubelet[2576]: E0702 00:22:37.966975 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:22:38.100757 containerd[2082]: time="2024-07-02T00:22:38.100299641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x5dxg,Uid:5b4fb344-60af-4260-b6c2-41ad84a8e2e0,Namespace:calico-system,Attempt:0,}" Jul 2 00:22:38.207436 containerd[2082]: time="2024-07-02T00:22:38.207388421Z" level=error msg="Failed to destroy network for sandbox \"464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:22:38.209170 containerd[2082]: time="2024-07-02T00:22:38.208972906Z" level=error msg="encountered an error cleaning up failed sandbox \"464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:22:38.209170 containerd[2082]: time="2024-07-02T00:22:38.209052966Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x5dxg,Uid:5b4fb344-60af-4260-b6c2-41ad84a8e2e0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:22:38.212709 kubelet[2576]: E0702 00:22:38.212684 2576 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:22:38.213009 kubelet[2576]: E0702 00:22:38.212757 2576 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x5dxg" Jul 2 00:22:38.213009 kubelet[2576]: E0702 00:22:38.212787 2576 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x5dxg" Jul 2 00:22:38.213009 kubelet[2576]: E0702 00:22:38.212854 2576 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-x5dxg_calico-system(5b4fb344-60af-4260-b6c2-41ad84a8e2e0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-x5dxg_calico-system(5b4fb344-60af-4260-b6c2-41ad84a8e2e0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x5dxg" podUID="5b4fb344-60af-4260-b6c2-41ad84a8e2e0" Jul 2 00:22:38.213206 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356-shm.mount: Deactivated successfully. 
Jul 2 00:22:38.348517 kubelet[2576]: I0702 00:22:38.342612 2576 topology_manager.go:215] "Topology Admit Handler" podUID="86df1bea-06e9-4eb3-9a03-c4d5b6430f31" podNamespace="default" podName="nginx-deployment-6d5f899847-rwhhv" Jul 2 00:22:38.390075 kubelet[2576]: I0702 00:22:38.389981 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snfnt\" (UniqueName: \"kubernetes.io/projected/86df1bea-06e9-4eb3-9a03-c4d5b6430f31-kube-api-access-snfnt\") pod \"nginx-deployment-6d5f899847-rwhhv\" (UID: \"86df1bea-06e9-4eb3-9a03-c4d5b6430f31\") " pod="default/nginx-deployment-6d5f899847-rwhhv" Jul 2 00:22:38.655562 containerd[2082]: time="2024-07-02T00:22:38.655436012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-rwhhv,Uid:86df1bea-06e9-4eb3-9a03-c4d5b6430f31,Namespace:default,Attempt:0,}" Jul 2 00:22:38.817843 containerd[2082]: time="2024-07-02T00:22:38.817791108Z" level=error msg="Failed to destroy network for sandbox \"1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:22:38.818526 containerd[2082]: time="2024-07-02T00:22:38.818213095Z" level=error msg="encountered an error cleaning up failed sandbox \"1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:22:38.818526 containerd[2082]: time="2024-07-02T00:22:38.818272496Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-rwhhv,Uid:86df1bea-06e9-4eb3-9a03-c4d5b6430f31,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:22:38.818822 kubelet[2576]: E0702 00:22:38.818564 2576 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:22:38.818822 kubelet[2576]: E0702 00:22:38.818638 2576 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-rwhhv" Jul 2 00:22:38.818822 kubelet[2576]: E0702 00:22:38.818666 2576 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-rwhhv" Jul 2 00:22:38.819655 kubelet[2576]: E0702 00:22:38.818744 2576 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-rwhhv_default(86df1bea-06e9-4eb3-9a03-c4d5b6430f31)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-rwhhv_default(86df1bea-06e9-4eb3-9a03-c4d5b6430f31)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-rwhhv" podUID="86df1bea-06e9-4eb3-9a03-c4d5b6430f31" Jul 2 00:22:38.968600 kubelet[2576]: E0702 00:22:38.968085 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:22:39.124451 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f-shm.mount: Deactivated successfully. Jul 2 00:22:39.176913 kubelet[2576]: I0702 00:22:39.176884 2576 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f" Jul 2 00:22:39.177703 containerd[2082]: time="2024-07-02T00:22:39.177662242Z" level=info msg="StopPodSandbox for \"1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f\"" Jul 2 00:22:39.178192 containerd[2082]: time="2024-07-02T00:22:39.177897164Z" level=info msg="Ensure that sandbox 1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f in task-service has been cleanup successfully" Jul 2 00:22:39.180434 kubelet[2576]: I0702 00:22:39.179973 2576 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356" Jul 2 00:22:39.180768 containerd[2082]: time="2024-07-02T00:22:39.180729942Z" level=info msg="StopPodSandbox for \"464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356\"" Jul 2 00:22:39.180976 containerd[2082]: time="2024-07-02T00:22:39.180952798Z" level=info msg="Ensure that sandbox 464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356 in task-service has been cleanup successfully" Jul 2 00:22:39.253663 containerd[2082]: time="2024-07-02T00:22:39.252808728Z" level=error msg="StopPodSandbox for \"464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356\" failed" error="failed to destroy network for sandbox \"464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:22:39.255135 kubelet[2576]: E0702 00:22:39.254524 2576 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356" Jul 2 00:22:39.255135 kubelet[2576]: E0702 00:22:39.254660 2576 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356"} Jul 2 00:22:39.255135 kubelet[2576]: E0702 00:22:39.254713 2576 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5b4fb344-60af-4260-b6c2-41ad84a8e2e0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:22:39.255135 kubelet[2576]: E0702 00:22:39.254779 2576 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5b4fb344-60af-4260-b6c2-41ad84a8e2e0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x5dxg" podUID="5b4fb344-60af-4260-b6c2-41ad84a8e2e0" Jul 2 00:22:39.256165 containerd[2082]: time="2024-07-02T00:22:39.256119882Z" level=error msg="StopPodSandbox for \"1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f\" failed" error="failed to destroy network for sandbox \"1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 2 00:22:39.256659 kubelet[2576]: E0702 00:22:39.256633 2576 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f" Jul 2 00:22:39.256836 kubelet[2576]: E0702 00:22:39.256679 2576 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f"} Jul 2 00:22:39.256836 kubelet[2576]: E0702 00:22:39.256805 2576 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"86df1bea-06e9-4eb3-9a03-c4d5b6430f31\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 2 00:22:39.257093 kubelet[2576]: E0702 00:22:39.256846 2576 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"86df1bea-06e9-4eb3-9a03-c4d5b6430f31\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-rwhhv" podUID="86df1bea-06e9-4eb3-9a03-c4d5b6430f31" Jul 2 00:22:39.943113 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 2 00:22:39.969461 kubelet[2576]: E0702 00:22:39.969355 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:22:40.970268 kubelet[2576]: E0702 00:22:40.970192 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:22:41.948828 kubelet[2576]: E0702 00:22:41.948679 2576 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:22:41.971230 kubelet[2576]: E0702 00:22:41.970921 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:22:42.972177 kubelet[2576]: E0702 00:22:42.972061 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:22:43.595236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4137069854.mount: Deactivated successfully. Jul 2 00:22:43.654225 containerd[2082]: time="2024-07-02T00:22:43.654170286Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:22:43.655665 containerd[2082]: time="2024-07-02T00:22:43.655513966Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Jul 2 00:22:43.658386 containerd[2082]: time="2024-07-02T00:22:43.657190801Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:22:43.660680 containerd[2082]: time="2024-07-02T00:22:43.659875369Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:22:43.660680 containerd[2082]: time="2024-07-02T00:22:43.660539485Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 6.487425825s" Jul 2 00:22:43.660680 containerd[2082]: time="2024-07-02T00:22:43.660580773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Jul 2 00:22:43.697613 containerd[2082]: time="2024-07-02T00:22:43.697555583Z" level=info msg="CreateContainer within sandbox \"ec51c834d6d68af8cc8a29980b3581d31092aa132c1824287b71f87283041188\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 2 00:22:43.721022 containerd[2082]: time="2024-07-02T00:22:43.720968639Z" level=info msg="CreateContainer within sandbox 
\"ec51c834d6d68af8cc8a29980b3581d31092aa132c1824287b71f87283041188\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a9e44d4a2a1921c7ee3c8f95a2b099e4ed547d49528e34a1c3e6b3931dff795c\"" Jul 2 00:22:43.721880 containerd[2082]: time="2024-07-02T00:22:43.721838923Z" level=info msg="StartContainer for \"a9e44d4a2a1921c7ee3c8f95a2b099e4ed547d49528e34a1c3e6b3931dff795c\"" Jul 2 00:22:43.845033 containerd[2082]: time="2024-07-02T00:22:43.844980051Z" level=info msg="StartContainer for \"a9e44d4a2a1921c7ee3c8f95a2b099e4ed547d49528e34a1c3e6b3931dff795c\" returns successfully" Jul 2 00:22:43.972716 kubelet[2576]: E0702 00:22:43.972608 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:22:44.032150 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 2 00:22:44.032602 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 2 00:22:44.233616 kubelet[2576]: I0702 00:22:44.233464 2576 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-w9lnf" podStartSLOduration=3.18239107 podCreationTimestamp="2024-07-02 00:22:23 +0000 UTC" firstStartedPulling="2024-07-02 00:22:25.610293556 +0000 UTC m=+3.969495324" lastFinishedPulling="2024-07-02 00:22:43.661324294 +0000 UTC m=+22.020526064" observedRunningTime="2024-07-02 00:22:44.233399834 +0000 UTC m=+22.592601624" watchObservedRunningTime="2024-07-02 00:22:44.23342181 +0000 UTC m=+22.592623596" Jul 2 00:22:44.973354 kubelet[2576]: E0702 00:22:44.973304 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:22:45.975593 kubelet[2576]: E0702 00:22:45.974113 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:22:46.014523 kernel: Initializing XFRM netlink socket Jul 2 00:22:46.199103 (udev-worker)[3174]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:22:46.200426 systemd-networkd[1655]: vxlan.calico: Link UP Jul 2 00:22:46.200432 systemd-networkd[1655]: vxlan.calico: Gained carrier Jul 2 00:22:46.231453 (udev-worker)[3349]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:22:46.234761 (udev-worker)[3348]: Network interface NamePolicy= disabled on kernel command line. 
Jul 2 00:22:46.974735 kubelet[2576]: E0702 00:22:46.974679 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:22:47.664298 systemd-networkd[1655]: vxlan.calico: Gained IPv6LL Jul 2 00:22:47.975581 kubelet[2576]: E0702 00:22:47.975455 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:22:48.976385 kubelet[2576]: E0702 00:22:48.976327 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:22:49.073412 kubelet[2576]: I0702 00:22:49.072401 2576 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 00:22:49.796743 ntpd[2038]: Listen normally on 6 vxlan.calico 192.168.38.192:123 Jul 2 00:22:49.796990 ntpd[2038]: Listen normally on 7 vxlan.calico [fe80::6410:e4ff:fed1:f337%3]:123 Jul 2 00:22:49.797421 ntpd[2038]: 2 Jul 00:22:49 ntpd[2038]: Listen normally on 6 vxlan.calico 192.168.38.192:123 Jul 2 00:22:49.797421 ntpd[2038]: 2 Jul 00:22:49 ntpd[2038]: Listen normally on 7 vxlan.calico [fe80::6410:e4ff:fed1:f337%3]:123 Jul 2 00:22:49.976726 kubelet[2576]: E0702 00:22:49.976673 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:22:50.977395 kubelet[2576]: E0702 00:22:50.977346 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:22:51.862675 kubelet[2576]: I0702 00:22:51.862636 2576 topology_manager.go:215] "Topology Admit Handler" podUID="27b38050-e382-467c-b047-becd5250617a" podNamespace="calico-apiserver" podName="calico-apiserver-558c446b65-qmr97" Jul 2 00:22:51.882844 kubelet[2576]: I0702 00:22:51.882775 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/27b38050-e382-467c-b047-becd5250617a-calico-apiserver-certs\") pod \"calico-apiserver-558c446b65-qmr97\" (UID: \"27b38050-e382-467c-b047-becd5250617a\") " pod="calico-apiserver/calico-apiserver-558c446b65-qmr97" Jul 2 00:22:51.883091 kubelet[2576]: I0702 00:22:51.883052 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xsz2p\" (UniqueName: \"kubernetes.io/projected/27b38050-e382-467c-b047-becd5250617a-kube-api-access-xsz2p\") pod \"calico-apiserver-558c446b65-qmr97\" (UID: \"27b38050-e382-467c-b047-becd5250617a\") " pod="calico-apiserver/calico-apiserver-558c446b65-qmr97" Jul 2 00:22:51.977663 kubelet[2576]: E0702 00:22:51.977607 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:22:51.984582 kubelet[2576]: E0702 00:22:51.984349 2576 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jul 2 00:22:51.984582 kubelet[2576]: E0702 00:22:51.984455 2576 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/27b38050-e382-467c-b047-becd5250617a-calico-apiserver-certs podName:27b38050-e382-467c-b047-becd5250617a nodeName:}" failed. No retries permitted until 2024-07-02 00:22:52.484417364 +0000 UTC m=+30.843619144 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/27b38050-e382-467c-b047-becd5250617a-calico-apiserver-certs") pod "calico-apiserver-558c446b65-qmr97" (UID: "27b38050-e382-467c-b047-becd5250617a") : secret "calico-apiserver-certs" not found Jul 2 00:22:52.768774 containerd[2082]: time="2024-07-02T00:22:52.768733288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-558c446b65-qmr97,Uid:27b38050-e382-467c-b047-becd5250617a,Namespace:calico-apiserver,Attempt:0,}" Jul 2 00:22:52.978779 kubelet[2576]: E0702 00:22:52.978017 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:22:53.098304 containerd[2082]: time="2024-07-02T00:22:53.098172446Z" level=info msg="StopPodSandbox for \"1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f\"" Jul 2 00:22:53.172830 (udev-worker)[3480]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:22:53.182156 systemd-networkd[1655]: cali92244669de7: Link UP Jul 2 00:22:53.184178 systemd-networkd[1655]: cali92244669de7: Gained carrier Jul 2 00:22:53.215270 containerd[2082]: 2024-07-02 00:22:52.932 [INFO][3445] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.26.26-k8s-calico--apiserver--558c446b65--qmr97-eth0 calico-apiserver-558c446b65- calico-apiserver 27b38050-e382-467c-b047-becd5250617a 1085 0 2024-07-02 00:22:51 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:558c446b65 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 172.31.26.26 calico-apiserver-558c446b65-qmr97 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali92244669de7 [] []}} ContainerID="f905a2b223803574aa8310dd76688db4da2deb9b6f721e790fcc071b027616fb" Namespace="calico-apiserver" Pod="calico-apiserver-558c446b65-qmr97" WorkloadEndpoint="172.31.26.26-k8s-calico--apiserver--558c446b65--qmr97-" Jul 2 00:22:53.215270 containerd[2082]: 2024-07-02 00:22:52.933 [INFO][3445] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f905a2b223803574aa8310dd76688db4da2deb9b6f721e790fcc071b027616fb" Namespace="calico-apiserver" Pod="calico-apiserver-558c446b65-qmr97" WorkloadEndpoint="172.31.26.26-k8s-calico--apiserver--558c446b65--qmr97-eth0" Jul 2 00:22:53.215270 containerd[2082]: 2024-07-02 00:22:53.038 [INFO][3456] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f905a2b223803574aa8310dd76688db4da2deb9b6f721e790fcc071b027616fb" HandleID="k8s-pod-network.f905a2b223803574aa8310dd76688db4da2deb9b6f721e790fcc071b027616fb" Workload="172.31.26.26-k8s-calico--apiserver--558c446b65--qmr97-eth0" Jul 2 00:22:53.215270 containerd[2082]: 2024-07-02 00:22:53.070 [INFO][3456] ipam_plugin.go 264: Auto assigning IP ContainerID="f905a2b223803574aa8310dd76688db4da2deb9b6f721e790fcc071b027616fb" HandleID="k8s-pod-network.f905a2b223803574aa8310dd76688db4da2deb9b6f721e790fcc071b027616fb" Workload="172.31.26.26-k8s-calico--apiserver--558c446b65--qmr97-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030a220), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"172.31.26.26", "pod":"calico-apiserver-558c446b65-qmr97", "timestamp":"2024-07-02 00:22:53.038645368 +0000 UTC"}, Hostname:"172.31.26.26", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:22:53.215270 containerd[2082]: 2024-07-02 00:22:53.070 [INFO][3456] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:22:53.215270 containerd[2082]: 2024-07-02 00:22:53.070 [INFO][3456] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:22:53.215270 containerd[2082]: 2024-07-02 00:22:53.070 [INFO][3456] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.26.26' Jul 2 00:22:53.215270 containerd[2082]: 2024-07-02 00:22:53.074 [INFO][3456] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f905a2b223803574aa8310dd76688db4da2deb9b6f721e790fcc071b027616fb" host="172.31.26.26" Jul 2 00:22:53.215270 containerd[2082]: 2024-07-02 00:22:53.083 [INFO][3456] ipam.go 372: Looking up existing affinities for host host="172.31.26.26" Jul 2 00:22:53.215270 containerd[2082]: 2024-07-02 00:22:53.100 [INFO][3456] ipam.go 489: Trying affinity for 192.168.38.192/26 host="172.31.26.26" Jul 2 00:22:53.215270 containerd[2082]: 2024-07-02 00:22:53.106 [INFO][3456] ipam.go 155: Attempting to load block cidr=192.168.38.192/26 host="172.31.26.26" Jul 2 00:22:53.215270 containerd[2082]: 2024-07-02 00:22:53.114 [INFO][3456] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.38.192/26 host="172.31.26.26" Jul 2 00:22:53.215270 containerd[2082]: 2024-07-02 00:22:53.115 [INFO][3456] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.38.192/26 handle="k8s-pod-network.f905a2b223803574aa8310dd76688db4da2deb9b6f721e790fcc071b027616fb" host="172.31.26.26" Jul 2 00:22:53.215270 containerd[2082]: 2024-07-02 00:22:53.119 [INFO][3456] ipam.go 1685: Creating new handle: k8s-pod-network.f905a2b223803574aa8310dd76688db4da2deb9b6f721e790fcc071b027616fb Jul 2 00:22:53.215270 containerd[2082]: 2024-07-02 00:22:53.130 [INFO][3456] ipam.go 1203: Writing block in order to claim IPs block=192.168.38.192/26 handle="k8s-pod-network.f905a2b223803574aa8310dd76688db4da2deb9b6f721e790fcc071b027616fb" host="172.31.26.26" Jul 2 00:22:53.215270 containerd[2082]: 2024-07-02 00:22:53.149 [INFO][3456] ipam.go 1216: Successfully claimed IPs: [192.168.38.193/26] block=192.168.38.192/26 handle="k8s-pod-network.f905a2b223803574aa8310dd76688db4da2deb9b6f721e790fcc071b027616fb" host="172.31.26.26" Jul 2 00:22:53.215270 containerd[2082]: 2024-07-02 00:22:53.150 [INFO][3456] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.38.193/26] handle="k8s-pod-network.f905a2b223803574aa8310dd76688db4da2deb9b6f721e790fcc071b027616fb" host="172.31.26.26" Jul 2 00:22:53.215270 containerd[2082]: 2024-07-02 00:22:53.150 [INFO][3456] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 00:22:53.215270 containerd[2082]: 2024-07-02 00:22:53.151 [INFO][3456] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.38.193/26] IPv6=[] ContainerID="f905a2b223803574aa8310dd76688db4da2deb9b6f721e790fcc071b027616fb" HandleID="k8s-pod-network.f905a2b223803574aa8310dd76688db4da2deb9b6f721e790fcc071b027616fb" Workload="172.31.26.26-k8s-calico--apiserver--558c446b65--qmr97-eth0" Jul 2 00:22:53.217075 containerd[2082]: 2024-07-02 00:22:53.155 [INFO][3445] k8s.go 386: Populated endpoint ContainerID="f905a2b223803574aa8310dd76688db4da2deb9b6f721e790fcc071b027616fb" Namespace="calico-apiserver" Pod="calico-apiserver-558c446b65-qmr97" WorkloadEndpoint="172.31.26.26-k8s-calico--apiserver--558c446b65--qmr97-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.26-k8s-calico--apiserver--558c446b65--qmr97-eth0", GenerateName:"calico-apiserver-558c446b65-", Namespace:"calico-apiserver", SelfLink:"", UID:"27b38050-e382-467c-b047-becd5250617a", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 22, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"558c446b65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.26.26", ContainerID:"", Pod:"calico-apiserver-558c446b65-qmr97", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali92244669de7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:22:53.217075 containerd[2082]: 2024-07-02 00:22:53.155 [INFO][3445] k8s.go 387: Calico CNI using IPs: [192.168.38.193/32] ContainerID="f905a2b223803574aa8310dd76688db4da2deb9b6f721e790fcc071b027616fb" Namespace="calico-apiserver" Pod="calico-apiserver-558c446b65-qmr97" WorkloadEndpoint="172.31.26.26-k8s-calico--apiserver--558c446b65--qmr97-eth0" Jul 2 00:22:53.217075 containerd[2082]: 2024-07-02 00:22:53.155 [INFO][3445] dataplane_linux.go 68: Setting the host side veth name to cali92244669de7 ContainerID="f905a2b223803574aa8310dd76688db4da2deb9b6f721e790fcc071b027616fb" Namespace="calico-apiserver" Pod="calico-apiserver-558c446b65-qmr97" WorkloadEndpoint="172.31.26.26-k8s-calico--apiserver--558c446b65--qmr97-eth0" Jul 2 00:22:53.217075 containerd[2082]: 2024-07-02 00:22:53.183 [INFO][3445] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="f905a2b223803574aa8310dd76688db4da2deb9b6f721e790fcc071b027616fb" Namespace="calico-apiserver" Pod="calico-apiserver-558c446b65-qmr97" WorkloadEndpoint="172.31.26.26-k8s-calico--apiserver--558c446b65--qmr97-eth0" Jul 2 00:22:53.217075 containerd[2082]: 2024-07-02 00:22:53.185 [INFO][3445] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f905a2b223803574aa8310dd76688db4da2deb9b6f721e790fcc071b027616fb" Namespace="calico-apiserver" 
Pod="calico-apiserver-558c446b65-qmr97" WorkloadEndpoint="172.31.26.26-k8s-calico--apiserver--558c446b65--qmr97-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.26-k8s-calico--apiserver--558c446b65--qmr97-eth0", GenerateName:"calico-apiserver-558c446b65-", Namespace:"calico-apiserver", SelfLink:"", UID:"27b38050-e382-467c-b047-becd5250617a", ResourceVersion:"1085", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 22, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"558c446b65", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.26.26", ContainerID:"f905a2b223803574aa8310dd76688db4da2deb9b6f721e790fcc071b027616fb", Pod:"calico-apiserver-558c446b65-qmr97", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali92244669de7", MAC:"aa:4e:8b:c0:44:35", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:22:53.217075 containerd[2082]: 2024-07-02 00:22:53.208 [INFO][3445] k8s.go 500: Wrote updated endpoint to datastore ContainerID="f905a2b223803574aa8310dd76688db4da2deb9b6f721e790fcc071b027616fb" Namespace="calico-apiserver" Pod="calico-apiserver-558c446b65-qmr97" WorkloadEndpoint="172.31.26.26-k8s-calico--apiserver--558c446b65--qmr97-eth0" Jul 2 00:22:53.304900 containerd[2082]: time="2024-07-02T00:22:53.304704044Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:22:53.304900 containerd[2082]: time="2024-07-02T00:22:53.304767955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:22:53.304900 containerd[2082]: time="2024-07-02T00:22:53.304791247Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:22:53.304900 containerd[2082]: time="2024-07-02T00:22:53.304805977Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:22:53.388018 containerd[2082]: 2024-07-02 00:22:53.235 [INFO][3475] k8s.go 608: Cleaning up netns ContainerID="1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f" Jul 2 00:22:53.388018 containerd[2082]: 2024-07-02 00:22:53.236 [INFO][3475] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f" iface="eth0" netns="/var/run/netns/cni-4b409b82-83f1-ad86-f3ea-85c93918d8fa" Jul 2 00:22:53.388018 containerd[2082]: 2024-07-02 00:22:53.236 [INFO][3475] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f" iface="eth0" netns="/var/run/netns/cni-4b409b82-83f1-ad86-f3ea-85c93918d8fa" Jul 2 00:22:53.388018 containerd[2082]: 2024-07-02 00:22:53.236 [INFO][3475] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f" iface="eth0" netns="/var/run/netns/cni-4b409b82-83f1-ad86-f3ea-85c93918d8fa" Jul 2 00:22:53.388018 containerd[2082]: 2024-07-02 00:22:53.236 [INFO][3475] k8s.go 615: Releasing IP address(es) ContainerID="1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f" Jul 2 00:22:53.388018 containerd[2082]: 2024-07-02 00:22:53.236 [INFO][3475] utils.go 188: Calico CNI releasing IP address ContainerID="1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f" Jul 2 00:22:53.388018 containerd[2082]: 2024-07-02 00:22:53.297 [INFO][3497] ipam_plugin.go 411: Releasing address using handleID ContainerID="1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f" HandleID="k8s-pod-network.1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f" Workload="172.31.26.26-k8s-nginx--deployment--6d5f899847--rwhhv-eth0" Jul 2 00:22:53.388018 containerd[2082]: 2024-07-02 00:22:53.297 [INFO][3497] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:22:53.388018 containerd[2082]: 2024-07-02 00:22:53.297 [INFO][3497] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:22:53.388018 containerd[2082]: 2024-07-02 00:22:53.309 [WARNING][3497] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f" HandleID="k8s-pod-network.1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f" Workload="172.31.26.26-k8s-nginx--deployment--6d5f899847--rwhhv-eth0" Jul 2 00:22:53.388018 containerd[2082]: 2024-07-02 00:22:53.309 [INFO][3497] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f" HandleID="k8s-pod-network.1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f" Workload="172.31.26.26-k8s-nginx--deployment--6d5f899847--rwhhv-eth0" Jul 2 00:22:53.388018 containerd[2082]: 2024-07-02 00:22:53.369 [INFO][3497] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:22:53.388018 containerd[2082]: 2024-07-02 00:22:53.382 [INFO][3475] k8s.go 621: Teardown processing complete. ContainerID="1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f" Jul 2 00:22:53.391947 containerd[2082]: time="2024-07-02T00:22:53.391525330Z" level=info msg="TearDown network for sandbox \"1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f\" successfully" Jul 2 00:22:53.391947 containerd[2082]: time="2024-07-02T00:22:53.391712931Z" level=info msg="StopPodSandbox for \"1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f\" returns successfully" Jul 2 00:22:53.398577 containerd[2082]: time="2024-07-02T00:22:53.398412043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-rwhhv,Uid:86df1bea-06e9-4eb3-9a03-c4d5b6430f31,Namespace:default,Attempt:1,}" Jul 2 00:22:53.399569 systemd[1]: run-netns-cni\x2d4b409b82\x2d83f1\x2dad86\x2df3ea\x2d85c93918d8fa.mount: Deactivated successfully. 
Jul 2 00:22:53.462090 containerd[2082]: time="2024-07-02T00:22:53.462032862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-558c446b65-qmr97,Uid:27b38050-e382-467c-b047-becd5250617a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f905a2b223803574aa8310dd76688db4da2deb9b6f721e790fcc071b027616fb\"" Jul 2 00:22:53.472376 containerd[2082]: time="2024-07-02T00:22:53.472114618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jul 2 00:22:53.665173 (udev-worker)[3484]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:22:53.672121 systemd-networkd[1655]: cali3c2847fc1a1: Link UP Jul 2 00:22:53.672955 systemd-networkd[1655]: cali3c2847fc1a1: Gained carrier Jul 2 00:22:53.708898 containerd[2082]: 2024-07-02 00:22:53.523 [INFO][3545] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.26.26-k8s-nginx--deployment--6d5f899847--rwhhv-eth0 nginx-deployment-6d5f899847- default 86df1bea-06e9-4eb3-9a03-c4d5b6430f31 1100 0 2024-07-02 00:22:38 +0000 UTC map[app:nginx pod-template-hash:6d5f899847 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.26.26 nginx-deployment-6d5f899847-rwhhv eth0 default [] [] [kns.default ksa.default.default] cali3c2847fc1a1 [] []}} ContainerID="40a2589bf1c7a1805b974aede12623eaf80d4e5d111b1c06ee732724812339db" Namespace="default" Pod="nginx-deployment-6d5f899847-rwhhv" WorkloadEndpoint="172.31.26.26-k8s-nginx--deployment--6d5f899847--rwhhv-" Jul 2 00:22:53.708898 containerd[2082]: 2024-07-02 00:22:53.523 [INFO][3545] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="40a2589bf1c7a1805b974aede12623eaf80d4e5d111b1c06ee732724812339db" Namespace="default" Pod="nginx-deployment-6d5f899847-rwhhv" WorkloadEndpoint="172.31.26.26-k8s-nginx--deployment--6d5f899847--rwhhv-eth0" Jul 2 00:22:53.708898 containerd[2082]: 2024-07-02 00:22:53.567 [INFO][3558] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="40a2589bf1c7a1805b974aede12623eaf80d4e5d111b1c06ee732724812339db" HandleID="k8s-pod-network.40a2589bf1c7a1805b974aede12623eaf80d4e5d111b1c06ee732724812339db" Workload="172.31.26.26-k8s-nginx--deployment--6d5f899847--rwhhv-eth0" Jul 2 00:22:53.708898 containerd[2082]: 2024-07-02 00:22:53.585 [INFO][3558] ipam_plugin.go 264: Auto assigning IP ContainerID="40a2589bf1c7a1805b974aede12623eaf80d4e5d111b1c06ee732724812339db" HandleID="k8s-pod-network.40a2589bf1c7a1805b974aede12623eaf80d4e5d111b1c06ee732724812339db" Workload="172.31.26.26-k8s-nginx--deployment--6d5f899847--rwhhv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000265e20), Attrs:map[string]string{"namespace":"default", "node":"172.31.26.26", "pod":"nginx-deployment-6d5f899847-rwhhv", "timestamp":"2024-07-02 00:22:53.566959698 +0000 UTC"}, Hostname:"172.31.26.26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:22:53.708898 containerd[2082]: 2024-07-02 00:22:53.586 [INFO][3558] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:22:53.708898 containerd[2082]: 2024-07-02 00:22:53.586 [INFO][3558] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jul 2 00:22:53.708898 containerd[2082]: 2024-07-02 00:22:53.586 [INFO][3558] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.26.26' Jul 2 00:22:53.708898 containerd[2082]: 2024-07-02 00:22:53.589 [INFO][3558] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.40a2589bf1c7a1805b974aede12623eaf80d4e5d111b1c06ee732724812339db" host="172.31.26.26" Jul 2 00:22:53.708898 containerd[2082]: 2024-07-02 00:22:53.602 [INFO][3558] ipam.go 372: Looking up existing affinities for host host="172.31.26.26" Jul 2 00:22:53.708898 containerd[2082]: 2024-07-02 00:22:53.615 [INFO][3558] ipam.go 489: Trying affinity for 192.168.38.192/26 host="172.31.26.26" Jul 2 00:22:53.708898 containerd[2082]: 2024-07-02 00:22:53.620 [INFO][3558] ipam.go 155: Attempting to load block cidr=192.168.38.192/26 host="172.31.26.26" Jul 2 00:22:53.708898 containerd[2082]: 2024-07-02 00:22:53.625 [INFO][3558] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.38.192/26 host="172.31.26.26" Jul 2 00:22:53.708898 containerd[2082]: 2024-07-02 00:22:53.625 [INFO][3558] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.38.192/26 handle="k8s-pod-network.40a2589bf1c7a1805b974aede12623eaf80d4e5d111b1c06ee732724812339db" host="172.31.26.26" Jul 2 00:22:53.708898 containerd[2082]: 2024-07-02 00:22:53.628 [INFO][3558] ipam.go 1685: Creating new handle: k8s-pod-network.40a2589bf1c7a1805b974aede12623eaf80d4e5d111b1c06ee732724812339db Jul 2 00:22:53.708898 containerd[2082]: 2024-07-02 00:22:53.635 [INFO][3558] ipam.go 1203: Writing block in order to claim IPs block=192.168.38.192/26 handle="k8s-pod-network.40a2589bf1c7a1805b974aede12623eaf80d4e5d111b1c06ee732724812339db" host="172.31.26.26" Jul 2 00:22:53.708898 containerd[2082]: 2024-07-02 00:22:53.657 [INFO][3558] ipam.go 1216: Successfully claimed IPs: [192.168.38.194/26] block=192.168.38.192/26 handle="k8s-pod-network.40a2589bf1c7a1805b974aede12623eaf80d4e5d111b1c06ee732724812339db" host="172.31.26.26" Jul 2 00:22:53.708898 containerd[2082]: 2024-07-02 00:22:53.658 [INFO][3558] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.38.194/26] handle="k8s-pod-network.40a2589bf1c7a1805b974aede12623eaf80d4e5d111b1c06ee732724812339db" host="172.31.26.26" Jul 2 00:22:53.708898 containerd[2082]: 2024-07-02 00:22:53.658 [INFO][3558] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 00:22:53.708898 containerd[2082]: 2024-07-02 00:22:53.658 [INFO][3558] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.38.194/26] IPv6=[] ContainerID="40a2589bf1c7a1805b974aede12623eaf80d4e5d111b1c06ee732724812339db" HandleID="k8s-pod-network.40a2589bf1c7a1805b974aede12623eaf80d4e5d111b1c06ee732724812339db" Workload="172.31.26.26-k8s-nginx--deployment--6d5f899847--rwhhv-eth0" Jul 2 00:22:53.709950 containerd[2082]: 2024-07-02 00:22:53.660 [INFO][3545] k8s.go 386: Populated endpoint ContainerID="40a2589bf1c7a1805b974aede12623eaf80d4e5d111b1c06ee732724812339db" Namespace="default" Pod="nginx-deployment-6d5f899847-rwhhv" WorkloadEndpoint="172.31.26.26-k8s-nginx--deployment--6d5f899847--rwhhv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.26-k8s-nginx--deployment--6d5f899847--rwhhv-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"86df1bea-06e9-4eb3-9a03-c4d5b6430f31", ResourceVersion:"1100", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 22, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.26.26", ContainerID:"", Pod:"nginx-deployment-6d5f899847-rwhhv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.38.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali3c2847fc1a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:22:53.709950 containerd[2082]: 2024-07-02 00:22:53.660 [INFO][3545] k8s.go 387: Calico CNI using IPs: [192.168.38.194/32] ContainerID="40a2589bf1c7a1805b974aede12623eaf80d4e5d111b1c06ee732724812339db" Namespace="default" Pod="nginx-deployment-6d5f899847-rwhhv" WorkloadEndpoint="172.31.26.26-k8s-nginx--deployment--6d5f899847--rwhhv-eth0" Jul 2 00:22:53.709950 containerd[2082]: 2024-07-02 00:22:53.660 [INFO][3545] dataplane_linux.go 68: Setting the host side veth name to cali3c2847fc1a1 ContainerID="40a2589bf1c7a1805b974aede12623eaf80d4e5d111b1c06ee732724812339db" Namespace="default" Pod="nginx-deployment-6d5f899847-rwhhv" WorkloadEndpoint="172.31.26.26-k8s-nginx--deployment--6d5f899847--rwhhv-eth0" Jul 2 00:22:53.709950 containerd[2082]: 2024-07-02 00:22:53.665 [INFO][3545] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="40a2589bf1c7a1805b974aede12623eaf80d4e5d111b1c06ee732724812339db" Namespace="default" Pod="nginx-deployment-6d5f899847-rwhhv" WorkloadEndpoint="172.31.26.26-k8s-nginx--deployment--6d5f899847--rwhhv-eth0" Jul 2 00:22:53.709950 containerd[2082]: 2024-07-02 00:22:53.666 [INFO][3545] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="40a2589bf1c7a1805b974aede12623eaf80d4e5d111b1c06ee732724812339db" Namespace="default" Pod="nginx-deployment-6d5f899847-rwhhv" WorkloadEndpoint="172.31.26.26-k8s-nginx--deployment--6d5f899847--rwhhv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.26-k8s-nginx--deployment--6d5f899847--rwhhv-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"86df1bea-06e9-4eb3-9a03-c4d5b6430f31", ResourceVersion:"1100", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 22, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.26.26", ContainerID:"40a2589bf1c7a1805b974aede12623eaf80d4e5d111b1c06ee732724812339db", Pod:"nginx-deployment-6d5f899847-rwhhv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.38.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali3c2847fc1a1", MAC:"92:fd:e5:2a:31:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:22:53.709950 containerd[2082]: 2024-07-02 00:22:53.705 [INFO][3545] k8s.go 500: Wrote updated endpoint to datastore ContainerID="40a2589bf1c7a1805b974aede12623eaf80d4e5d111b1c06ee732724812339db" Namespace="default" Pod="nginx-deployment-6d5f899847-rwhhv" WorkloadEndpoint="172.31.26.26-k8s-nginx--deployment--6d5f899847--rwhhv-eth0" Jul 2 00:22:53.749348 containerd[2082]: time="2024-07-02T00:22:53.749234589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:22:53.749348 containerd[2082]: time="2024-07-02T00:22:53.749300669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:22:53.749348 containerd[2082]: time="2024-07-02T00:22:53.749323835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:22:53.749924 containerd[2082]: time="2024-07-02T00:22:53.749735786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:22:53.860692 containerd[2082]: time="2024-07-02T00:22:53.860650643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-rwhhv,Uid:86df1bea-06e9-4eb3-9a03-c4d5b6430f31,Namespace:default,Attempt:1,} returns sandbox id \"40a2589bf1c7a1805b974aede12623eaf80d4e5d111b1c06ee732724812339db\"" Jul 2 00:22:53.978874 kubelet[2576]: E0702 00:22:53.978726 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:22:54.421938 update_engine[2063]: I0702 00:22:54.421741 2063 update_attempter.cc:509] Updating boot flags... 
Jul 2 00:22:54.527685 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (3629) Jul 2 00:22:54.896510 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 35 scanned by (udev-worker) (3629) Jul 2 00:22:54.898821 systemd-networkd[1655]: cali92244669de7: Gained IPv6LL Jul 2 00:22:54.899720 systemd-networkd[1655]: cali3c2847fc1a1: Gained IPv6LL Jul 2 00:22:54.981527 kubelet[2576]: E0702 00:22:54.980013 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:22:55.098291 containerd[2082]: time="2024-07-02T00:22:55.098237032Z" level=info msg="StopPodSandbox for \"464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356\"" Jul 2 00:22:55.406140 containerd[2082]: 2024-07-02 00:22:55.305 [INFO][3810] k8s.go 608: Cleaning up netns ContainerID="464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356" Jul 2 00:22:55.406140 containerd[2082]: 2024-07-02 00:22:55.306 [INFO][3810] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356" iface="eth0" netns="/var/run/netns/cni-9e40fdf0-f4ab-8be9-6ffe-e9ba6eec36f8" Jul 2 00:22:55.406140 containerd[2082]: 2024-07-02 00:22:55.308 [INFO][3810] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356" iface="eth0" netns="/var/run/netns/cni-9e40fdf0-f4ab-8be9-6ffe-e9ba6eec36f8" Jul 2 00:22:55.406140 containerd[2082]: 2024-07-02 00:22:55.308 [INFO][3810] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356" iface="eth0" netns="/var/run/netns/cni-9e40fdf0-f4ab-8be9-6ffe-e9ba6eec36f8" Jul 2 00:22:55.406140 containerd[2082]: 2024-07-02 00:22:55.308 [INFO][3810] k8s.go 615: Releasing IP address(es) ContainerID="464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356" Jul 2 00:22:55.406140 containerd[2082]: 2024-07-02 00:22:55.308 [INFO][3810] utils.go 188: Calico CNI releasing IP address ContainerID="464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356" Jul 2 00:22:55.406140 containerd[2082]: 2024-07-02 00:22:55.373 [INFO][3820] ipam_plugin.go 411: Releasing address using handleID ContainerID="464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356" HandleID="k8s-pod-network.464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356" Workload="172.31.26.26-k8s-csi--node--driver--x5dxg-eth0" Jul 2 00:22:55.406140 containerd[2082]: 2024-07-02 00:22:55.374 [INFO][3820] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:22:55.406140 containerd[2082]: 2024-07-02 00:22:55.374 [INFO][3820] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:22:55.406140 containerd[2082]: 2024-07-02 00:22:55.395 [WARNING][3820] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356" HandleID="k8s-pod-network.464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356" Workload="172.31.26.26-k8s-csi--node--driver--x5dxg-eth0" Jul 2 00:22:55.406140 containerd[2082]: 2024-07-02 00:22:55.395 [INFO][3820] ipam_plugin.go 439: Releasing address using workloadID ContainerID="464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356" HandleID="k8s-pod-network.464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356" Workload="172.31.26.26-k8s-csi--node--driver--x5dxg-eth0" Jul 2 00:22:55.406140 containerd[2082]: 2024-07-02 00:22:55.401 [INFO][3820] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:22:55.406140 containerd[2082]: 2024-07-02 00:22:55.403 [INFO][3810] k8s.go 621: Teardown processing complete. ContainerID="464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356" Jul 2 00:22:55.407938 containerd[2082]: time="2024-07-02T00:22:55.407891346Z" level=info msg="TearDown network for sandbox \"464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356\" successfully" Jul 2 00:22:55.408072 containerd[2082]: time="2024-07-02T00:22:55.408052135Z" level=info msg="StopPodSandbox for \"464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356\" returns successfully" Jul 2 00:22:55.412129 containerd[2082]: time="2024-07-02T00:22:55.411829966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x5dxg,Uid:5b4fb344-60af-4260-b6c2-41ad84a8e2e0,Namespace:calico-system,Attempt:1,}" Jul 2 00:22:55.412880 systemd[1]: run-netns-cni\x2d9e40fdf0\x2df4ab\x2d8be9\x2d6ffe\x2de9ba6eec36f8.mount: Deactivated successfully. Jul 2 00:22:55.703154 systemd-networkd[1655]: cali1456a048a03: Link UP Jul 2 00:22:55.704304 systemd-networkd[1655]: cali1456a048a03: Gained carrier Jul 2 00:22:55.743227 containerd[2082]: 2024-07-02 00:22:55.537 [INFO][3829] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.26.26-k8s-csi--node--driver--x5dxg-eth0 csi-node-driver- calico-system 5b4fb344-60af-4260-b6c2-41ad84a8e2e0 1117 0 2024-07-02 00:22:23 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.26.26 csi-node-driver-x5dxg eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali1456a048a03 [] []}} ContainerID="2a933fbfa66287b847ceffb0d03cafb43141ecab6124ee490e14d8708c8c643d" Namespace="calico-system" Pod="csi-node-driver-x5dxg" WorkloadEndpoint="172.31.26.26-k8s-csi--node--driver--x5dxg-" Jul 2 00:22:55.743227 containerd[2082]: 2024-07-02 00:22:55.537 [INFO][3829] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2a933fbfa66287b847ceffb0d03cafb43141ecab6124ee490e14d8708c8c643d" Namespace="calico-system" Pod="csi-node-driver-x5dxg" WorkloadEndpoint="172.31.26.26-k8s-csi--node--driver--x5dxg-eth0" Jul 2 00:22:55.743227 containerd[2082]: 2024-07-02 00:22:55.602 [INFO][3841] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a933fbfa66287b847ceffb0d03cafb43141ecab6124ee490e14d8708c8c643d" HandleID="k8s-pod-network.2a933fbfa66287b847ceffb0d03cafb43141ecab6124ee490e14d8708c8c643d" Workload="172.31.26.26-k8s-csi--node--driver--x5dxg-eth0" Jul 2 00:22:55.743227 containerd[2082]: 2024-07-02 00:22:55.617 [INFO][3841] 
ipam_plugin.go 264: Auto assigning IP ContainerID="2a933fbfa66287b847ceffb0d03cafb43141ecab6124ee490e14d8708c8c643d" HandleID="k8s-pod-network.2a933fbfa66287b847ceffb0d03cafb43141ecab6124ee490e14d8708c8c643d" Workload="172.31.26.26-k8s-csi--node--driver--x5dxg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002edcc0), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.26.26", "pod":"csi-node-driver-x5dxg", "timestamp":"2024-07-02 00:22:55.602545942 +0000 UTC"}, Hostname:"172.31.26.26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:22:55.743227 containerd[2082]: 2024-07-02 00:22:55.617 [INFO][3841] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:22:55.743227 containerd[2082]: 2024-07-02 00:22:55.617 [INFO][3841] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:22:55.743227 containerd[2082]: 2024-07-02 00:22:55.617 [INFO][3841] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.26.26' Jul 2 00:22:55.743227 containerd[2082]: 2024-07-02 00:22:55.622 [INFO][3841] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2a933fbfa66287b847ceffb0d03cafb43141ecab6124ee490e14d8708c8c643d" host="172.31.26.26" Jul 2 00:22:55.743227 containerd[2082]: 2024-07-02 00:22:55.637 [INFO][3841] ipam.go 372: Looking up existing affinities for host host="172.31.26.26" Jul 2 00:22:55.743227 containerd[2082]: 2024-07-02 00:22:55.666 [INFO][3841] ipam.go 489: Trying affinity for 192.168.38.192/26 host="172.31.26.26" Jul 2 00:22:55.743227 containerd[2082]: 2024-07-02 00:22:55.669 [INFO][3841] ipam.go 155: Attempting to load block cidr=192.168.38.192/26 host="172.31.26.26" Jul 2 00:22:55.743227 containerd[2082]: 2024-07-02 00:22:55.673 [INFO][3841] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.38.192/26 host="172.31.26.26" Jul 2 00:22:55.743227 containerd[2082]: 2024-07-02 00:22:55.673 [INFO][3841] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.38.192/26 handle="k8s-pod-network.2a933fbfa66287b847ceffb0d03cafb43141ecab6124ee490e14d8708c8c643d" host="172.31.26.26" Jul 2 00:22:55.743227 containerd[2082]: 2024-07-02 00:22:55.676 [INFO][3841] ipam.go 1685: Creating new handle: k8s-pod-network.2a933fbfa66287b847ceffb0d03cafb43141ecab6124ee490e14d8708c8c643d Jul 2 00:22:55.743227 containerd[2082]: 2024-07-02 00:22:55.681 [INFO][3841] ipam.go 1203: Writing block in order to claim IPs block=192.168.38.192/26 handle="k8s-pod-network.2a933fbfa66287b847ceffb0d03cafb43141ecab6124ee490e14d8708c8c643d" host="172.31.26.26" Jul 2 00:22:55.743227 containerd[2082]: 2024-07-02 00:22:55.689 [INFO][3841] ipam.go 1216: Successfully claimed IPs: [192.168.38.195/26] block=192.168.38.192/26 handle="k8s-pod-network.2a933fbfa66287b847ceffb0d03cafb43141ecab6124ee490e14d8708c8c643d" host="172.31.26.26" Jul 2 00:22:55.743227 containerd[2082]: 2024-07-02 00:22:55.689 [INFO][3841] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.38.195/26] handle="k8s-pod-network.2a933fbfa66287b847ceffb0d03cafb43141ecab6124ee490e14d8708c8c643d" host="172.31.26.26" Jul 2 00:22:55.743227 containerd[2082]: 2024-07-02 00:22:55.689 [INFO][3841] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 00:22:55.743227 containerd[2082]: 2024-07-02 00:22:55.689 [INFO][3841] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.38.195/26] IPv6=[] ContainerID="2a933fbfa66287b847ceffb0d03cafb43141ecab6124ee490e14d8708c8c643d" HandleID="k8s-pod-network.2a933fbfa66287b847ceffb0d03cafb43141ecab6124ee490e14d8708c8c643d" Workload="172.31.26.26-k8s-csi--node--driver--x5dxg-eth0" Jul 2 00:22:55.745216 containerd[2082]: 2024-07-02 00:22:55.695 [INFO][3829] k8s.go 386: Populated endpoint ContainerID="2a933fbfa66287b847ceffb0d03cafb43141ecab6124ee490e14d8708c8c643d" Namespace="calico-system" Pod="csi-node-driver-x5dxg" WorkloadEndpoint="172.31.26.26-k8s-csi--node--driver--x5dxg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.26-k8s-csi--node--driver--x5dxg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5b4fb344-60af-4260-b6c2-41ad84a8e2e0", ResourceVersion:"1117", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 22, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.26.26", ContainerID:"", Pod:"csi-node-driver-x5dxg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.38.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali1456a048a03", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:22:55.745216 containerd[2082]: 2024-07-02 00:22:55.695 [INFO][3829] k8s.go 387: Calico CNI using IPs: [192.168.38.195/32] ContainerID="2a933fbfa66287b847ceffb0d03cafb43141ecab6124ee490e14d8708c8c643d" Namespace="calico-system" Pod="csi-node-driver-x5dxg" WorkloadEndpoint="172.31.26.26-k8s-csi--node--driver--x5dxg-eth0" Jul 2 00:22:55.745216 containerd[2082]: 2024-07-02 00:22:55.695 [INFO][3829] dataplane_linux.go 68: Setting the host side veth name to cali1456a048a03 ContainerID="2a933fbfa66287b847ceffb0d03cafb43141ecab6124ee490e14d8708c8c643d" Namespace="calico-system" Pod="csi-node-driver-x5dxg" WorkloadEndpoint="172.31.26.26-k8s-csi--node--driver--x5dxg-eth0" Jul 2 00:22:55.745216 containerd[2082]: 2024-07-02 00:22:55.703 [INFO][3829] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="2a933fbfa66287b847ceffb0d03cafb43141ecab6124ee490e14d8708c8c643d" Namespace="calico-system" Pod="csi-node-driver-x5dxg" WorkloadEndpoint="172.31.26.26-k8s-csi--node--driver--x5dxg-eth0" Jul 2 00:22:55.745216 containerd[2082]: 2024-07-02 00:22:55.705 [INFO][3829] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2a933fbfa66287b847ceffb0d03cafb43141ecab6124ee490e14d8708c8c643d" Namespace="calico-system" Pod="csi-node-driver-x5dxg" WorkloadEndpoint="172.31.26.26-k8s-csi--node--driver--x5dxg-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.26-k8s-csi--node--driver--x5dxg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5b4fb344-60af-4260-b6c2-41ad84a8e2e0", ResourceVersion:"1117", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 22, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.26.26", ContainerID:"2a933fbfa66287b847ceffb0d03cafb43141ecab6124ee490e14d8708c8c643d", Pod:"csi-node-driver-x5dxg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.38.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali1456a048a03", MAC:"26:4e:cd:e0:75:41", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:22:55.745216 containerd[2082]: 2024-07-02 00:22:55.740 [INFO][3829] k8s.go 500: Wrote updated endpoint to datastore ContainerID="2a933fbfa66287b847ceffb0d03cafb43141ecab6124ee490e14d8708c8c643d" Namespace="calico-system" Pod="csi-node-driver-x5dxg" WorkloadEndpoint="172.31.26.26-k8s-csi--node--driver--x5dxg-eth0" Jul 2 00:22:55.806081 containerd[2082]: time="2024-07-02T00:22:55.805988471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:22:55.806373 containerd[2082]: time="2024-07-02T00:22:55.806063457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:22:55.806373 containerd[2082]: time="2024-07-02T00:22:55.806094826Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:22:55.806373 containerd[2082]: time="2024-07-02T00:22:55.806113955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:22:55.940242 containerd[2082]: time="2024-07-02T00:22:55.938911503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x5dxg,Uid:5b4fb344-60af-4260-b6c2-41ad84a8e2e0,Namespace:calico-system,Attempt:1,} returns sandbox id \"2a933fbfa66287b847ceffb0d03cafb43141ecab6124ee490e14d8708c8c643d\"" Jul 2 00:22:55.980464 kubelet[2576]: E0702 00:22:55.980346 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:22:56.720578 containerd[2082]: time="2024-07-02T00:22:56.720527465Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:22:56.722250 containerd[2082]: time="2024-07-02T00:22:56.722100941Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260" Jul 2 00:22:56.725186 containerd[2082]: time="2024-07-02T00:22:56.724566373Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:22:56.729295 containerd[2082]: time="2024-07-02T00:22:56.729236871Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:22:56.732730 containerd[2082]: time="2024-07-02T00:22:56.732634466Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 3.260458579s" Jul 2 00:22:56.732945 containerd[2082]: time="2024-07-02T00:22:56.732734632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Jul 2 00:22:56.735499 containerd[2082]: time="2024-07-02T00:22:56.734924294Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 2 00:22:56.739647 containerd[2082]: time="2024-07-02T00:22:56.738686811Z" level=info msg="CreateContainer within sandbox \"f905a2b223803574aa8310dd76688db4da2deb9b6f721e790fcc071b027616fb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 2 00:22:56.760033 containerd[2082]: time="2024-07-02T00:22:56.759983491Z" level=info msg="CreateContainer within sandbox \"f905a2b223803574aa8310dd76688db4da2deb9b6f721e790fcc071b027616fb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3ddc627d4863a87507d9fd86b73dfa33cb2a6185ed931b37086f790dd26d5701\"" Jul 2 00:22:56.760884 containerd[2082]: time="2024-07-02T00:22:56.760843843Z" level=info msg="StartContainer for \"3ddc627d4863a87507d9fd86b73dfa33cb2a6185ed931b37086f790dd26d5701\"" Jul 2 00:22:56.858209 containerd[2082]: time="2024-07-02T00:22:56.858153874Z" level=info msg="StartContainer for \"3ddc627d4863a87507d9fd86b73dfa33cb2a6185ed931b37086f790dd26d5701\" returns successfully" Jul 2 00:22:56.981873 kubelet[2576]: E0702 00:22:56.980802 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:22:57.290723 
kubelet[2576]: I0702 00:22:57.290591 2576 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-558c446b65-qmr97" podStartSLOduration=3.028485734 podCreationTimestamp="2024-07-02 00:22:51 +0000 UTC" firstStartedPulling="2024-07-02 00:22:53.471759942 +0000 UTC m=+31.830961712" lastFinishedPulling="2024-07-02 00:22:56.73382609 +0000 UTC m=+35.093027872" observedRunningTime="2024-07-02 00:22:57.290478775 +0000 UTC m=+35.649680565" watchObservedRunningTime="2024-07-02 00:22:57.290551894 +0000 UTC m=+35.649753679" Jul 2 00:22:57.455697 systemd-networkd[1655]: cali1456a048a03: Gained IPv6LL Jul 2 00:22:57.981975 kubelet[2576]: E0702 00:22:57.981929 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:22:58.266342 kubelet[2576]: I0702 00:22:58.266096 2576 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 00:22:58.982601 kubelet[2576]: E0702 00:22:58.982469 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:22:59.797077 ntpd[2038]: Listen normally on 8 cali92244669de7 [fe80::ecee:eeff:feee:eeee%6]:123 Jul 2 00:22:59.797170 ntpd[2038]: Listen normally on 9 cali3c2847fc1a1 [fe80::ecee:eeff:feee:eeee%7]:123 Jul 2 00:22:59.798075 ntpd[2038]: 2 Jul 00:22:59 ntpd[2038]: Listen normally on 8 cali92244669de7 [fe80::ecee:eeff:feee:eeee%6]:123 Jul 2 00:22:59.798075 ntpd[2038]: 2 Jul 00:22:59 ntpd[2038]: Listen normally on 9 cali3c2847fc1a1 [fe80::ecee:eeff:feee:eeee%7]:123 Jul 2 00:22:59.798075 ntpd[2038]: 2 Jul 00:22:59 ntpd[2038]: Listen normally on 10 cali1456a048a03 [fe80::ecee:eeff:feee:eeee%8]:123 Jul 2 00:22:59.797213 ntpd[2038]: Listen normally on 10 cali1456a048a03 [fe80::ecee:eeff:feee:eeee%8]:123 Jul 2 00:22:59.883609 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1232163923.mount: Deactivated successfully. 
Jul 2 00:22:59.982887 kubelet[2576]: E0702 00:22:59.982842 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:00.983115 kubelet[2576]: E0702 00:23:00.983056 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:01.951276 kubelet[2576]: E0702 00:23:01.951232 2576 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:01.983598 kubelet[2576]: E0702 00:23:01.983552 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:02.844801 containerd[2082]: time="2024-07-02T00:23:02.844743751Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:23:02.846646 containerd[2082]: time="2024-07-02T00:23:02.846580684Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71000000" Jul 2 00:23:02.848407 containerd[2082]: time="2024-07-02T00:23:02.848362601Z" level=info msg="ImageCreate event name:\"sha256:a1bda1bb6f7f0fd17a3ae397f26593ab0aa8e8b92e3e8a9903f99fdb26afea17\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:23:02.851662 containerd[2082]: time="2024-07-02T00:23:02.851608029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:bf28ef5d86aca0cd30a8ef19032ccadc1eada35dc9f14f42f3ccb73974f013de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:23:02.858585 containerd[2082]: time="2024-07-02T00:23:02.858534610Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:a1bda1bb6f7f0fd17a3ae397f26593ab0aa8e8b92e3e8a9903f99fdb26afea17\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:bf28ef5d86aca0cd30a8ef19032ccadc1eada35dc9f14f42f3ccb73974f013de\", size \"70999878\" in 6.123557467s" Jul 2 00:23:02.858902 containerd[2082]: time="2024-07-02T00:23:02.858787501Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:a1bda1bb6f7f0fd17a3ae397f26593ab0aa8e8b92e3e8a9903f99fdb26afea17\"" Jul 2 00:23:02.861886 containerd[2082]: time="2024-07-02T00:23:02.860588593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jul 2 00:23:02.862174 containerd[2082]: time="2024-07-02T00:23:02.862140311Z" level=info msg="CreateContainer within sandbox \"40a2589bf1c7a1805b974aede12623eaf80d4e5d111b1c06ee732724812339db\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jul 2 00:23:02.890798 containerd[2082]: time="2024-07-02T00:23:02.888342485Z" level=info msg="CreateContainer within sandbox \"40a2589bf1c7a1805b974aede12623eaf80d4e5d111b1c06ee732724812339db\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"5755c16b63eb247a13c5b9d8bcf6e44e119e441285d6a377284ba35cf99f9b9b\"" Jul 2 00:23:02.889309 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount153458989.mount: Deactivated successfully. 
Jul 2 00:23:02.893835 containerd[2082]: time="2024-07-02T00:23:02.893794673Z" level=info msg="StartContainer for \"5755c16b63eb247a13c5b9d8bcf6e44e119e441285d6a377284ba35cf99f9b9b\"" Jul 2 00:23:02.985586 kubelet[2576]: E0702 00:23:02.985516 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:03.069470 containerd[2082]: time="2024-07-02T00:23:03.069199923Z" level=info msg="StartContainer for \"5755c16b63eb247a13c5b9d8bcf6e44e119e441285d6a377284ba35cf99f9b9b\" returns successfully" Jul 2 00:23:03.988623 kubelet[2576]: E0702 00:23:03.985687 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:04.418509 containerd[2082]: time="2024-07-02T00:23:04.418446393Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:23:04.420259 containerd[2082]: time="2024-07-02T00:23:04.420198090Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Jul 2 00:23:04.429600 containerd[2082]: time="2024-07-02T00:23:04.422233132Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:23:04.441115 containerd[2082]: time="2024-07-02T00:23:04.441060734Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:23:04.444571 containerd[2082]: time="2024-07-02T00:23:04.444520855Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 1.583891885s" Jul 2 00:23:04.444571 containerd[2082]: time="2024-07-02T00:23:04.444574156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Jul 2 00:23:04.457330 containerd[2082]: time="2024-07-02T00:23:04.457144517Z" level=info msg="CreateContainer within sandbox \"2a933fbfa66287b847ceffb0d03cafb43141ecab6124ee490e14d8708c8c643d\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 2 00:23:04.503135 containerd[2082]: time="2024-07-02T00:23:04.503079528Z" level=info msg="CreateContainer within sandbox \"2a933fbfa66287b847ceffb0d03cafb43141ecab6124ee490e14d8708c8c643d\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0c06b0968d546024efb3226b99d5bb6c3f835b56dc80ca303417404fba3755d1\"" Jul 2 00:23:04.504024 containerd[2082]: time="2024-07-02T00:23:04.503983942Z" level=info msg="StartContainer for \"0c06b0968d546024efb3226b99d5bb6c3f835b56dc80ca303417404fba3755d1\"" Jul 2 00:23:04.607715 containerd[2082]: time="2024-07-02T00:23:04.607529078Z" level=info msg="StartContainer for \"0c06b0968d546024efb3226b99d5bb6c3f835b56dc80ca303417404fba3755d1\" returns successfully" Jul 2 00:23:04.610702 containerd[2082]: time="2024-07-02T00:23:04.610659508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jul 2 00:23:04.986442 kubelet[2576]: E0702 
00:23:04.986387 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:05.987700 kubelet[2576]: E0702 00:23:05.987582 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:06.137631 containerd[2082]: time="2024-07-02T00:23:06.137579431Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:23:06.147406 containerd[2082]: time="2024-07-02T00:23:06.147291128Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Jul 2 00:23:06.148704 containerd[2082]: time="2024-07-02T00:23:06.148664171Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:23:06.151908 containerd[2082]: time="2024-07-02T00:23:06.151843352Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:23:06.153755 containerd[2082]: time="2024-07-02T00:23:06.153140007Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 1.542431795s" Jul 2 00:23:06.153755 containerd[2082]: time="2024-07-02T00:23:06.153186230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Jul 2 00:23:06.155282 containerd[2082]: time="2024-07-02T00:23:06.155242236Z" level=info msg="CreateContainer within sandbox \"2a933fbfa66287b847ceffb0d03cafb43141ecab6124ee490e14d8708c8c643d\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 2 00:23:06.186502 containerd[2082]: time="2024-07-02T00:23:06.186442444Z" level=info msg="CreateContainer within sandbox \"2a933fbfa66287b847ceffb0d03cafb43141ecab6124ee490e14d8708c8c643d\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2848bada7b843cec347e0394bf976ad308966ae12b2938a83df29a882024c23d\"" Jul 2 00:23:06.187371 containerd[2082]: time="2024-07-02T00:23:06.187327879Z" level=info msg="StartContainer for \"2848bada7b843cec347e0394bf976ad308966ae12b2938a83df29a882024c23d\"" Jul 2 00:23:06.243960 systemd[1]: run-containerd-runc-k8s.io-2848bada7b843cec347e0394bf976ad308966ae12b2938a83df29a882024c23d-runc.XF7zsp.mount: Deactivated successfully. 
Jul 2 00:23:06.280904 containerd[2082]: time="2024-07-02T00:23:06.280850786Z" level=info msg="StartContainer for \"2848bada7b843cec347e0394bf976ad308966ae12b2938a83df29a882024c23d\" returns successfully" Jul 2 00:23:06.340026 kubelet[2576]: I0702 00:23:06.339986 2576 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-x5dxg" podStartSLOduration=33.128727575 podCreationTimestamp="2024-07-02 00:22:23 +0000 UTC" firstStartedPulling="2024-07-02 00:22:55.942191079 +0000 UTC m=+34.301392854" lastFinishedPulling="2024-07-02 00:23:06.153404411 +0000 UTC m=+44.512606192" observedRunningTime="2024-07-02 00:23:06.337638346 +0000 UTC m=+44.696840135" watchObservedRunningTime="2024-07-02 00:23:06.339940913 +0000 UTC m=+44.699142701" Jul 2 00:23:06.340264 kubelet[2576]: I0702 00:23:06.340146 2576 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-rwhhv" podStartSLOduration=19.342340538 podCreationTimestamp="2024-07-02 00:22:38 +0000 UTC" firstStartedPulling="2024-07-02 00:22:53.861918832 +0000 UTC m=+32.221120599" lastFinishedPulling="2024-07-02 00:23:02.859697965 +0000 UTC m=+41.218899764" observedRunningTime="2024-07-02 00:23:03.366535584 +0000 UTC m=+41.725737373" watchObservedRunningTime="2024-07-02 00:23:06.340119703 +0000 UTC m=+44.699321491" Jul 2 00:23:06.988491 kubelet[2576]: E0702 00:23:06.988420 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:07.116413 kubelet[2576]: I0702 00:23:07.116376 2576 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 2 00:23:07.116413 kubelet[2576]: I0702 00:23:07.116413 2576 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 2 00:23:07.989216 kubelet[2576]: E0702 00:23:07.989161 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:08.990122 kubelet[2576]: E0702 00:23:08.990058 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:09.640339 kubelet[2576]: I0702 00:23:09.640290 2576 topology_manager.go:215] "Topology Admit Handler" podUID="06b31b44-8740-4d79-b08c-a13ec1e32508" podNamespace="default" podName="nfs-server-provisioner-0" Jul 2 00:23:09.763151 kubelet[2576]: I0702 00:23:09.762896 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/06b31b44-8740-4d79-b08c-a13ec1e32508-data\") pod \"nfs-server-provisioner-0\" (UID: \"06b31b44-8740-4d79-b08c-a13ec1e32508\") " pod="default/nfs-server-provisioner-0" Jul 2 00:23:09.763151 kubelet[2576]: I0702 00:23:09.763100 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjjb7\" (UniqueName: \"kubernetes.io/projected/06b31b44-8740-4d79-b08c-a13ec1e32508-kube-api-access-kjjb7\") pod \"nfs-server-provisioner-0\" (UID: \"06b31b44-8740-4d79-b08c-a13ec1e32508\") " pod="default/nfs-server-provisioner-0" Jul 2 00:23:09.946763 containerd[2082]: time="2024-07-02T00:23:09.946640589Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:06b31b44-8740-4d79-b08c-a13ec1e32508,Namespace:default,Attempt:0,}" Jul 2 00:23:09.991095 kubelet[2576]: E0702 00:23:09.991049 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:10.222881 systemd-networkd[1655]: cali60e51b789ff: Link UP Jul 2 00:23:10.223187 systemd-networkd[1655]: cali60e51b789ff: Gained carrier Jul 2 00:23:10.232102 (udev-worker)[4159]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:23:10.266343 containerd[2082]: 2024-07-02 00:23:10.033 [INFO][4140] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.26.26-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 06b31b44-8740-4d79-b08c-a13ec1e32508 1209 0 2024-07-02 00:23:09 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 172.31.26.26 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="d95be309c16352d4d79d1f0891ed23cadc515ed36649e52fce8544ca2ba2af20" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.26.26-k8s-nfs--server--provisioner--0-" Jul 2 00:23:10.266343 containerd[2082]: 2024-07-02 00:23:10.033 [INFO][4140] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d95be309c16352d4d79d1f0891ed23cadc515ed36649e52fce8544ca2ba2af20" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.26.26-k8s-nfs--server--provisioner--0-eth0" Jul 2 00:23:10.266343 containerd[2082]: 2024-07-02 00:23:10.087 [INFO][4151] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d95be309c16352d4d79d1f0891ed23cadc515ed36649e52fce8544ca2ba2af20" HandleID="k8s-pod-network.d95be309c16352d4d79d1f0891ed23cadc515ed36649e52fce8544ca2ba2af20" Workload="172.31.26.26-k8s-nfs--server--provisioner--0-eth0" Jul 2 00:23:10.266343 containerd[2082]: 2024-07-02 00:23:10.111 [INFO][4151] ipam_plugin.go 264: Auto assigning IP ContainerID="d95be309c16352d4d79d1f0891ed23cadc515ed36649e52fce8544ca2ba2af20" HandleID="k8s-pod-network.d95be309c16352d4d79d1f0891ed23cadc515ed36649e52fce8544ca2ba2af20" Workload="172.31.26.26-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051270), Attrs:map[string]string{"namespace":"default", "node":"172.31.26.26", "pod":"nfs-server-provisioner-0", "timestamp":"2024-07-02 00:23:10.087475011 +0000 UTC"}, Hostname:"172.31.26.26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:23:10.266343 containerd[2082]: 2024-07-02 00:23:10.111 [INFO][4151] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
Jul 2 00:23:10.266343 containerd[2082]: 2024-07-02 00:23:10.111 [INFO][4151] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:23:10.266343 containerd[2082]: 2024-07-02 00:23:10.111 [INFO][4151] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.26.26' Jul 2 00:23:10.266343 containerd[2082]: 2024-07-02 00:23:10.114 [INFO][4151] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d95be309c16352d4d79d1f0891ed23cadc515ed36649e52fce8544ca2ba2af20" host="172.31.26.26" Jul 2 00:23:10.266343 containerd[2082]: 2024-07-02 00:23:10.121 [INFO][4151] ipam.go 372: Looking up existing affinities for host host="172.31.26.26" Jul 2 00:23:10.266343 containerd[2082]: 2024-07-02 00:23:10.134 [INFO][4151] ipam.go 489: Trying affinity for 192.168.38.192/26 host="172.31.26.26" Jul 2 00:23:10.266343 containerd[2082]: 2024-07-02 00:23:10.147 [INFO][4151] ipam.go 155: Attempting to load block cidr=192.168.38.192/26 host="172.31.26.26" Jul 2 00:23:10.266343 containerd[2082]: 2024-07-02 00:23:10.157 [INFO][4151] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.38.192/26 host="172.31.26.26" Jul 2 00:23:10.266343 containerd[2082]: 2024-07-02 00:23:10.157 [INFO][4151] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.38.192/26 handle="k8s-pod-network.d95be309c16352d4d79d1f0891ed23cadc515ed36649e52fce8544ca2ba2af20" host="172.31.26.26" Jul 2 00:23:10.266343 containerd[2082]: 2024-07-02 00:23:10.161 [INFO][4151] ipam.go 1685: Creating new handle: k8s-pod-network.d95be309c16352d4d79d1f0891ed23cadc515ed36649e52fce8544ca2ba2af20 Jul 2 00:23:10.266343 containerd[2082]: 2024-07-02 00:23:10.188 [INFO][4151] ipam.go 1203: Writing block in order to claim IPs block=192.168.38.192/26 handle="k8s-pod-network.d95be309c16352d4d79d1f0891ed23cadc515ed36649e52fce8544ca2ba2af20" host="172.31.26.26" Jul 2 00:23:10.266343 containerd[2082]: 2024-07-02 00:23:10.210 [INFO][4151] ipam.go 1216: Successfully claimed IPs: [192.168.38.196/26] block=192.168.38.192/26 handle="k8s-pod-network.d95be309c16352d4d79d1f0891ed23cadc515ed36649e52fce8544ca2ba2af20" host="172.31.26.26" Jul 2 00:23:10.266343 containerd[2082]: 2024-07-02 00:23:10.210 [INFO][4151] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.38.196/26] handle="k8s-pod-network.d95be309c16352d4d79d1f0891ed23cadc515ed36649e52fce8544ca2ba2af20" host="172.31.26.26" Jul 2 00:23:10.266343 containerd[2082]: 2024-07-02 00:23:10.210 [INFO][4151] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jul 2 00:23:10.266343 containerd[2082]: 2024-07-02 00:23:10.210 [INFO][4151] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.38.196/26] IPv6=[] ContainerID="d95be309c16352d4d79d1f0891ed23cadc515ed36649e52fce8544ca2ba2af20" HandleID="k8s-pod-network.d95be309c16352d4d79d1f0891ed23cadc515ed36649e52fce8544ca2ba2af20" Workload="172.31.26.26-k8s-nfs--server--provisioner--0-eth0" Jul 2 00:23:10.267505 containerd[2082]: 2024-07-02 00:23:10.213 [INFO][4140] k8s.go 386: Populated endpoint ContainerID="d95be309c16352d4d79d1f0891ed23cadc515ed36649e52fce8544ca2ba2af20" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.26.26-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.26-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"06b31b44-8740-4d79-b08c-a13ec1e32508", ResourceVersion:"1209", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 23, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.26.26", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.38.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:23:10.267505 containerd[2082]: 2024-07-02 00:23:10.214 [INFO][4140] k8s.go 387: Calico CNI using IPs: [192.168.38.196/32] ContainerID="d95be309c16352d4d79d1f0891ed23cadc515ed36649e52fce8544ca2ba2af20" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.26.26-k8s-nfs--server--provisioner--0-eth0" Jul 2 00:23:10.267505 containerd[2082]: 2024-07-02 00:23:10.214 [INFO][4140] dataplane_linux.go 68: Setting the host side veth name to cali60e51b789ff ContainerID="d95be309c16352d4d79d1f0891ed23cadc515ed36649e52fce8544ca2ba2af20" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.26.26-k8s-nfs--server--provisioner--0-eth0" Jul 2 00:23:10.267505 containerd[2082]: 2024-07-02 00:23:10.222 [INFO][4140] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="d95be309c16352d4d79d1f0891ed23cadc515ed36649e52fce8544ca2ba2af20" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.26.26-k8s-nfs--server--provisioner--0-eth0" Jul 2 00:23:10.267795 containerd[2082]: 2024-07-02 00:23:10.225 [INFO][4140] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d95be309c16352d4d79d1f0891ed23cadc515ed36649e52fce8544ca2ba2af20" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.26.26-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.26-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"06b31b44-8740-4d79-b08c-a13ec1e32508", ResourceVersion:"1209", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 23, 9, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.26.26", ContainerID:"d95be309c16352d4d79d1f0891ed23cadc515ed36649e52fce8544ca2ba2af20", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.38.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"62:26:e6:04:c3:ff", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, 
Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:23:10.267795 containerd[2082]: 2024-07-02 00:23:10.263 [INFO][4140] k8s.go 500: Wrote updated endpoint to datastore ContainerID="d95be309c16352d4d79d1f0891ed23cadc515ed36649e52fce8544ca2ba2af20" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.26.26-k8s-nfs--server--provisioner--0-eth0" Jul 2 00:23:10.354625 containerd[2082]: time="2024-07-02T00:23:10.352945161Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:23:10.364507 containerd[2082]: time="2024-07-02T00:23:10.360202529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:23:10.364507 containerd[2082]: time="2024-07-02T00:23:10.360253253Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:23:10.364507 containerd[2082]: time="2024-07-02T00:23:10.360269811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:23:10.467079 containerd[2082]: time="2024-07-02T00:23:10.467032711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:06b31b44-8740-4d79-b08c-a13ec1e32508,Namespace:default,Attempt:0,} returns sandbox id \"d95be309c16352d4d79d1f0891ed23cadc515ed36649e52fce8544ca2ba2af20\"" Jul 2 00:23:10.469145 containerd[2082]: time="2024-07-02T00:23:10.469092135Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jul 2 00:23:10.991414 kubelet[2576]: E0702 00:23:10.991359 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:11.792059 systemd-networkd[1655]: cali60e51b789ff: Gained IPv6LL Jul 2 00:23:11.994584 kubelet[2576]: E0702 00:23:11.993219 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:12.994084 kubelet[2576]: E0702 00:23:12.994028 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:13.796865 ntpd[2038]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%9]:123 Jul 2 00:23:13.797520 ntpd[2038]: 2 Jul 00:23:13 ntpd[2038]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%9]:123 Jul 2 00:23:13.994863 kubelet[2576]: E0702 00:23:13.994222 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:14.994930 kubelet[2576]: E0702 00:23:14.994896 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:15.047866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3545965597.mount: Deactivated successfully. 
Jul 2 00:23:15.998849 kubelet[2576]: E0702 00:23:15.998723 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:17.000690 kubelet[2576]: E0702 00:23:17.000634 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:18.007233 kubelet[2576]: E0702 00:23:18.007047 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:18.528850 containerd[2082]: time="2024-07-02T00:23:18.528568328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:23:18.536022 containerd[2082]: time="2024-07-02T00:23:18.535460079Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jul 2 00:23:18.550718 containerd[2082]: time="2024-07-02T00:23:18.550493871Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:23:18.603904 containerd[2082]: time="2024-07-02T00:23:18.603827914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:23:18.606298 containerd[2082]: time="2024-07-02T00:23:18.605515888Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 8.136364876s" Jul 2 00:23:18.606298 containerd[2082]: time="2024-07-02T00:23:18.605568332Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jul 2 00:23:18.622949 containerd[2082]: time="2024-07-02T00:23:18.622898056Z" level=info msg="CreateContainer within sandbox \"d95be309c16352d4d79d1f0891ed23cadc515ed36649e52fce8544ca2ba2af20\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jul 2 00:23:18.698933 containerd[2082]: time="2024-07-02T00:23:18.698881421Z" level=info msg="CreateContainer within sandbox \"d95be309c16352d4d79d1f0891ed23cadc515ed36649e52fce8544ca2ba2af20\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"36633d1ed17604f2749573d87eef04b2abeac2ccdb5fc47da1147a0690dc3fd6\"" Jul 2 00:23:18.699671 containerd[2082]: time="2024-07-02T00:23:18.699629024Z" level=info msg="StartContainer for \"36633d1ed17604f2749573d87eef04b2abeac2ccdb5fc47da1147a0690dc3fd6\"" Jul 2 00:23:18.782220 containerd[2082]: time="2024-07-02T00:23:18.782095152Z" level=info msg="StartContainer for \"36633d1ed17604f2749573d87eef04b2abeac2ccdb5fc47da1147a0690dc3fd6\" returns successfully" Jul 2 00:23:19.008196 kubelet[2576]: E0702 00:23:19.008156 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:19.432064 kubelet[2576]: I0702 00:23:19.431127 2576 
pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.293729935 podCreationTimestamp="2024-07-02 00:23:09 +0000 UTC" firstStartedPulling="2024-07-02 00:23:10.468593724 +0000 UTC m=+48.827795493" lastFinishedPulling="2024-07-02 00:23:18.605944739 +0000 UTC m=+56.965146518" observedRunningTime="2024-07-02 00:23:19.430348363 +0000 UTC m=+57.789550155" watchObservedRunningTime="2024-07-02 00:23:19.43108096 +0000 UTC m=+57.790282751" Jul 2 00:23:20.008650 kubelet[2576]: E0702 00:23:20.008593 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:21.009122 kubelet[2576]: E0702 00:23:21.009066 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:21.949159 kubelet[2576]: E0702 00:23:21.949105 2576 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:21.980197 containerd[2082]: time="2024-07-02T00:23:21.980150602Z" level=info msg="StopPodSandbox for \"1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f\"" Jul 2 00:23:22.011412 kubelet[2576]: E0702 00:23:22.011333 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:22.156535 containerd[2082]: 2024-07-02 00:23:22.092 [WARNING][4342] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.26-k8s-nginx--deployment--6d5f899847--rwhhv-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"86df1bea-06e9-4eb3-9a03-c4d5b6430f31", ResourceVersion:"1163", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 22, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.26.26", ContainerID:"40a2589bf1c7a1805b974aede12623eaf80d4e5d111b1c06ee732724812339db", Pod:"nginx-deployment-6d5f899847-rwhhv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.38.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali3c2847fc1a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:23:22.156535 containerd[2082]: 2024-07-02 00:23:22.092 [INFO][4342] k8s.go 608: Cleaning up netns ContainerID="1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f" Jul 2 00:23:22.156535 containerd[2082]: 2024-07-02 00:23:22.092 [INFO][4342] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f" iface="eth0" netns="" Jul 2 00:23:22.156535 containerd[2082]: 2024-07-02 00:23:22.092 [INFO][4342] k8s.go 615: Releasing IP address(es) ContainerID="1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f" Jul 2 00:23:22.156535 containerd[2082]: 2024-07-02 00:23:22.092 [INFO][4342] utils.go 188: Calico CNI releasing IP address ContainerID="1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f" Jul 2 00:23:22.156535 containerd[2082]: 2024-07-02 00:23:22.142 [INFO][4348] ipam_plugin.go 411: Releasing address using handleID ContainerID="1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f" HandleID="k8s-pod-network.1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f" Workload="172.31.26.26-k8s-nginx--deployment--6d5f899847--rwhhv-eth0" Jul 2 00:23:22.156535 containerd[2082]: 2024-07-02 00:23:22.142 [INFO][4348] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:23:22.156535 containerd[2082]: 2024-07-02 00:23:22.142 [INFO][4348] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:23:22.156535 containerd[2082]: 2024-07-02 00:23:22.150 [WARNING][4348] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f" HandleID="k8s-pod-network.1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f" Workload="172.31.26.26-k8s-nginx--deployment--6d5f899847--rwhhv-eth0" Jul 2 00:23:22.156535 containerd[2082]: 2024-07-02 00:23:22.150 [INFO][4348] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f" HandleID="k8s-pod-network.1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f" Workload="172.31.26.26-k8s-nginx--deployment--6d5f899847--rwhhv-eth0" Jul 2 00:23:22.156535 containerd[2082]: 2024-07-02 00:23:22.153 [INFO][4348] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:23:22.156535 containerd[2082]: 2024-07-02 00:23:22.154 [INFO][4342] k8s.go 621: Teardown processing complete. ContainerID="1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f" Jul 2 00:23:22.157209 containerd[2082]: time="2024-07-02T00:23:22.156546638Z" level=info msg="TearDown network for sandbox \"1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f\" successfully" Jul 2 00:23:22.157209 containerd[2082]: time="2024-07-02T00:23:22.156581057Z" level=info msg="StopPodSandbox for \"1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f\" returns successfully" Jul 2 00:23:22.165747 containerd[2082]: time="2024-07-02T00:23:22.165668574Z" level=info msg="RemovePodSandbox for \"1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f\"" Jul 2 00:23:22.172403 containerd[2082]: time="2024-07-02T00:23:22.172338104Z" level=info msg="Forcibly stopping sandbox \"1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f\"" Jul 2 00:23:22.276937 containerd[2082]: 2024-07-02 00:23:22.229 [WARNING][4368] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.26-k8s-nginx--deployment--6d5f899847--rwhhv-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"86df1bea-06e9-4eb3-9a03-c4d5b6430f31", ResourceVersion:"1163", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 22, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.26.26", ContainerID:"40a2589bf1c7a1805b974aede12623eaf80d4e5d111b1c06ee732724812339db", Pod:"nginx-deployment-6d5f899847-rwhhv", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.38.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali3c2847fc1a1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:23:22.276937 containerd[2082]: 2024-07-02 00:23:22.230 [INFO][4368] k8s.go 608: Cleaning up netns ContainerID="1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f" Jul 2 00:23:22.276937 containerd[2082]: 2024-07-02 00:23:22.230 [INFO][4368] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f" iface="eth0" netns="" Jul 2 00:23:22.276937 containerd[2082]: 2024-07-02 00:23:22.230 [INFO][4368] k8s.go 615: Releasing IP address(es) ContainerID="1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f" Jul 2 00:23:22.276937 containerd[2082]: 2024-07-02 00:23:22.230 [INFO][4368] utils.go 188: Calico CNI releasing IP address ContainerID="1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f" Jul 2 00:23:22.276937 containerd[2082]: 2024-07-02 00:23:22.258 [INFO][4374] ipam_plugin.go 411: Releasing address using handleID ContainerID="1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f" HandleID="k8s-pod-network.1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f" Workload="172.31.26.26-k8s-nginx--deployment--6d5f899847--rwhhv-eth0" Jul 2 00:23:22.276937 containerd[2082]: 2024-07-02 00:23:22.258 [INFO][4374] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:23:22.276937 containerd[2082]: 2024-07-02 00:23:22.258 [INFO][4374] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:23:22.276937 containerd[2082]: 2024-07-02 00:23:22.270 [WARNING][4374] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f" HandleID="k8s-pod-network.1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f" Workload="172.31.26.26-k8s-nginx--deployment--6d5f899847--rwhhv-eth0" Jul 2 00:23:22.276937 containerd[2082]: 2024-07-02 00:23:22.270 [INFO][4374] ipam_plugin.go 439: Releasing address using workloadID ContainerID="1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f" HandleID="k8s-pod-network.1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f" Workload="172.31.26.26-k8s-nginx--deployment--6d5f899847--rwhhv-eth0" Jul 2 00:23:22.276937 containerd[2082]: 2024-07-02 00:23:22.273 [INFO][4374] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:23:22.276937 containerd[2082]: 2024-07-02 00:23:22.274 [INFO][4368] k8s.go 621: Teardown processing complete. ContainerID="1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f" Jul 2 00:23:22.279473 containerd[2082]: time="2024-07-02T00:23:22.277944834Z" level=info msg="TearDown network for sandbox \"1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f\" successfully" Jul 2 00:23:22.293919 containerd[2082]: time="2024-07-02T00:23:22.293856516Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 00:23:22.294069 containerd[2082]: time="2024-07-02T00:23:22.293941215Z" level=info msg="RemovePodSandbox \"1f148381021ce19cb5c29259227e0d60506e8299078393fb717e5985deaf309f\" returns successfully" Jul 2 00:23:22.295654 containerd[2082]: time="2024-07-02T00:23:22.294465715Z" level=info msg="StopPodSandbox for \"464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356\"" Jul 2 00:23:22.397352 containerd[2082]: 2024-07-02 00:23:22.345 [WARNING][4392] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.26-k8s-csi--node--driver--x5dxg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5b4fb344-60af-4260-b6c2-41ad84a8e2e0", ResourceVersion:"1183", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 22, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.26.26", ContainerID:"2a933fbfa66287b847ceffb0d03cafb43141ecab6124ee490e14d8708c8c643d", Pod:"csi-node-driver-x5dxg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.38.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali1456a048a03", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:23:22.397352 containerd[2082]: 2024-07-02 00:23:22.346 [INFO][4392] k8s.go 608: Cleaning up netns ContainerID="464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356" Jul 2 00:23:22.397352 containerd[2082]: 2024-07-02 00:23:22.346 [INFO][4392] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356" iface="eth0" netns="" Jul 2 00:23:22.397352 containerd[2082]: 2024-07-02 00:23:22.346 [INFO][4392] k8s.go 615: Releasing IP address(es) ContainerID="464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356" Jul 2 00:23:22.397352 containerd[2082]: 2024-07-02 00:23:22.346 [INFO][4392] utils.go 188: Calico CNI releasing IP address ContainerID="464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356" Jul 2 00:23:22.397352 containerd[2082]: 2024-07-02 00:23:22.383 [INFO][4398] ipam_plugin.go 411: Releasing address using handleID ContainerID="464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356" HandleID="k8s-pod-network.464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356" Workload="172.31.26.26-k8s-csi--node--driver--x5dxg-eth0" Jul 2 00:23:22.397352 containerd[2082]: 2024-07-02 00:23:22.383 [INFO][4398] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:23:22.397352 containerd[2082]: 2024-07-02 00:23:22.383 [INFO][4398] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:23:22.397352 containerd[2082]: 2024-07-02 00:23:22.392 [WARNING][4398] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356" HandleID="k8s-pod-network.464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356" Workload="172.31.26.26-k8s-csi--node--driver--x5dxg-eth0" Jul 2 00:23:22.397352 containerd[2082]: 2024-07-02 00:23:22.393 [INFO][4398] ipam_plugin.go 439: Releasing address using workloadID ContainerID="464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356" HandleID="k8s-pod-network.464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356" Workload="172.31.26.26-k8s-csi--node--driver--x5dxg-eth0" Jul 2 00:23:22.397352 containerd[2082]: 2024-07-02 00:23:22.394 [INFO][4398] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:23:22.397352 containerd[2082]: 2024-07-02 00:23:22.395 [INFO][4392] k8s.go 621: Teardown processing complete. ContainerID="464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356" Jul 2 00:23:22.398236 containerd[2082]: time="2024-07-02T00:23:22.397389362Z" level=info msg="TearDown network for sandbox \"464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356\" successfully" Jul 2 00:23:22.398236 containerd[2082]: time="2024-07-02T00:23:22.397419513Z" level=info msg="StopPodSandbox for \"464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356\" returns successfully" Jul 2 00:23:22.398236 containerd[2082]: time="2024-07-02T00:23:22.397987440Z" level=info msg="RemovePodSandbox for \"464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356\"" Jul 2 00:23:22.398236 containerd[2082]: time="2024-07-02T00:23:22.398022529Z" level=info msg="Forcibly stopping sandbox \"464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356\"" Jul 2 00:23:22.536217 containerd[2082]: 2024-07-02 00:23:22.485 [WARNING][4416] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.26-k8s-csi--node--driver--x5dxg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5b4fb344-60af-4260-b6c2-41ad84a8e2e0", ResourceVersion:"1183", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 22, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.26.26", ContainerID:"2a933fbfa66287b847ceffb0d03cafb43141ecab6124ee490e14d8708c8c643d", Pod:"csi-node-driver-x5dxg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.38.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali1456a048a03", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:23:22.536217 containerd[2082]: 2024-07-02 00:23:22.485 [INFO][4416] k8s.go 608: Cleaning up netns ContainerID="464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356" Jul 2 00:23:22.536217 containerd[2082]: 2024-07-02 00:23:22.485 [INFO][4416] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356" iface="eth0" netns="" Jul 2 00:23:22.536217 containerd[2082]: 2024-07-02 00:23:22.485 [INFO][4416] k8s.go 615: Releasing IP address(es) ContainerID="464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356" Jul 2 00:23:22.536217 containerd[2082]: 2024-07-02 00:23:22.485 [INFO][4416] utils.go 188: Calico CNI releasing IP address ContainerID="464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356" Jul 2 00:23:22.536217 containerd[2082]: 2024-07-02 00:23:22.512 [INFO][4422] ipam_plugin.go 411: Releasing address using handleID ContainerID="464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356" HandleID="k8s-pod-network.464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356" Workload="172.31.26.26-k8s-csi--node--driver--x5dxg-eth0" Jul 2 00:23:22.536217 containerd[2082]: 2024-07-02 00:23:22.512 [INFO][4422] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:23:22.536217 containerd[2082]: 2024-07-02 00:23:22.512 [INFO][4422] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:23:22.536217 containerd[2082]: 2024-07-02 00:23:22.530 [WARNING][4422] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356" HandleID="k8s-pod-network.464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356" Workload="172.31.26.26-k8s-csi--node--driver--x5dxg-eth0" Jul 2 00:23:22.536217 containerd[2082]: 2024-07-02 00:23:22.530 [INFO][4422] ipam_plugin.go 439: Releasing address using workloadID ContainerID="464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356" HandleID="k8s-pod-network.464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356" Workload="172.31.26.26-k8s-csi--node--driver--x5dxg-eth0" Jul 2 00:23:22.536217 containerd[2082]: 2024-07-02 00:23:22.533 [INFO][4422] ipam_plugin.go 373: Released host-wide IPAM lock. Jul 2 00:23:22.536217 containerd[2082]: 2024-07-02 00:23:22.534 [INFO][4416] k8s.go 621: Teardown processing complete. ContainerID="464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356" Jul 2 00:23:22.536217 containerd[2082]: time="2024-07-02T00:23:22.536149250Z" level=info msg="TearDown network for sandbox \"464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356\" successfully" Jul 2 00:23:22.565456 containerd[2082]: time="2024-07-02T00:23:22.565391615Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 2 00:23:22.565762 containerd[2082]: time="2024-07-02T00:23:22.565463899Z" level=info msg="RemovePodSandbox \"464617bfa05b8927e166057340fd26b59a7c938394597ee62a377deac6beb356\" returns successfully" Jul 2 00:23:23.011869 kubelet[2576]: E0702 00:23:23.011817 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:24.012634 kubelet[2576]: E0702 00:23:24.012588 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:25.013510 kubelet[2576]: E0702 00:23:25.013441 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:26.014145 kubelet[2576]: E0702 00:23:26.014090 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:27.014691 kubelet[2576]: E0702 00:23:27.014643 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:28.015994 kubelet[2576]: E0702 00:23:28.015913 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:29.017044 kubelet[2576]: E0702 00:23:29.016985 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:30.018163 kubelet[2576]: E0702 00:23:30.018109 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:31.019259 kubelet[2576]: E0702 00:23:31.019180 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:32.019840 kubelet[2576]: E0702 00:23:32.019677 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:33.020928 kubelet[2576]: E0702 00:23:33.020870 2576 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:34.022524 kubelet[2576]: E0702 00:23:34.022385 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:35.022807 kubelet[2576]: E0702 00:23:35.022731 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:36.023729 kubelet[2576]: E0702 00:23:36.023674 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:37.023875 kubelet[2576]: E0702 00:23:37.023820 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:38.024573 kubelet[2576]: E0702 00:23:38.024459 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:39.024935 kubelet[2576]: E0702 00:23:39.024854 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:40.026233 kubelet[2576]: E0702 00:23:40.026055 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:41.026620 kubelet[2576]: E0702 00:23:41.026570 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:41.948369 kubelet[2576]: E0702 00:23:41.948282 2576 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:42.026869 kubelet[2576]: E0702 00:23:42.026816 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:43.027052 kubelet[2576]: E0702 00:23:43.026998 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:43.701214 kubelet[2576]: I0702 00:23:43.701169 2576 topology_manager.go:215] "Topology Admit Handler" podUID="7faec58a-9dca-41ee-a4fd-dbdf63de3410" podNamespace="default" podName="test-pod-1" Jul 2 00:23:43.817230 kubelet[2576]: I0702 00:23:43.817192 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plcl5\" (UniqueName: \"kubernetes.io/projected/7faec58a-9dca-41ee-a4fd-dbdf63de3410-kube-api-access-plcl5\") pod \"test-pod-1\" (UID: \"7faec58a-9dca-41ee-a4fd-dbdf63de3410\") " pod="default/test-pod-1" Jul 2 00:23:43.817468 kubelet[2576]: I0702 00:23:43.817308 2576 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-10184d8f-615f-4f41-bc37-dda03edae6a9\" (UniqueName: \"kubernetes.io/nfs/7faec58a-9dca-41ee-a4fd-dbdf63de3410-pvc-10184d8f-615f-4f41-bc37-dda03edae6a9\") pod \"test-pod-1\" (UID: \"7faec58a-9dca-41ee-a4fd-dbdf63de3410\") " pod="default/test-pod-1" Jul 2 00:23:43.977793 kernel: FS-Cache: Loaded Jul 2 00:23:44.028507 kubelet[2576]: E0702 00:23:44.028365 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:44.130849 kernel: RPC: Registered named UNIX socket transport module. Jul 2 00:23:44.130976 kernel: RPC: Registered udp transport module. Jul 2 00:23:44.131005 kernel: RPC: Registered tcp transport module. 
Jul 2 00:23:44.131032 kernel: RPC: Registered tcp-with-tls transport module. Jul 2 00:23:44.132171 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jul 2 00:23:44.676970 kernel: NFS: Registering the id_resolver key type Jul 2 00:23:44.677251 kernel: Key type id_resolver registered Jul 2 00:23:44.678188 kernel: Key type id_legacy registered Jul 2 00:23:44.781853 nfsidmap[4477]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Jul 2 00:23:44.807751 nfsidmap[4478]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal' Jul 2 00:23:44.908824 containerd[2082]: time="2024-07-02T00:23:44.908775748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:7faec58a-9dca-41ee-a4fd-dbdf63de3410,Namespace:default,Attempt:0,}" Jul 2 00:23:45.030016 kubelet[2576]: E0702 00:23:45.029397 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:45.162110 (udev-worker)[4466]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:23:45.162543 systemd-networkd[1655]: cali5ec59c6bf6e: Link UP Jul 2 00:23:45.163078 systemd-networkd[1655]: cali5ec59c6bf6e: Gained carrier Jul 2 00:23:45.196271 containerd[2082]: 2024-07-02 00:23:45.035 [INFO][4485] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.26.26-k8s-test--pod--1-eth0 default 7faec58a-9dca-41ee-a4fd-dbdf63de3410 1310 0 2024-07-02 00:23:11 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 172.31.26.26 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="15bcfa41231da2f5b38df8017fd566bd3f70bdf6b3e7c987492677e2221d109e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.26.26-k8s-test--pod--1-" Jul 2 00:23:45.196271 containerd[2082]: 2024-07-02 00:23:45.035 [INFO][4485] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="15bcfa41231da2f5b38df8017fd566bd3f70bdf6b3e7c987492677e2221d109e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.26.26-k8s-test--pod--1-eth0" Jul 2 00:23:45.196271 containerd[2082]: 2024-07-02 00:23:45.090 [INFO][4491] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="15bcfa41231da2f5b38df8017fd566bd3f70bdf6b3e7c987492677e2221d109e" HandleID="k8s-pod-network.15bcfa41231da2f5b38df8017fd566bd3f70bdf6b3e7c987492677e2221d109e" Workload="172.31.26.26-k8s-test--pod--1-eth0" Jul 2 00:23:45.196271 containerd[2082]: 2024-07-02 00:23:45.107 [INFO][4491] ipam_plugin.go 264: Auto assigning IP ContainerID="15bcfa41231da2f5b38df8017fd566bd3f70bdf6b3e7c987492677e2221d109e" HandleID="k8s-pod-network.15bcfa41231da2f5b38df8017fd566bd3f70bdf6b3e7c987492677e2221d109e" Workload="172.31.26.26-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dfca0), Attrs:map[string]string{"namespace":"default", "node":"172.31.26.26", "pod":"test-pod-1", "timestamp":"2024-07-02 00:23:45.090476532 +0000 UTC"}, Hostname:"172.31.26.26", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 2 00:23:45.196271 containerd[2082]: 2024-07-02 00:23:45.107 
[INFO][4491] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jul 2 00:23:45.196271 containerd[2082]: 2024-07-02 00:23:45.107 [INFO][4491] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jul 2 00:23:45.196271 containerd[2082]: 2024-07-02 00:23:45.107 [INFO][4491] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.26.26' Jul 2 00:23:45.196271 containerd[2082]: 2024-07-02 00:23:45.110 [INFO][4491] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.15bcfa41231da2f5b38df8017fd566bd3f70bdf6b3e7c987492677e2221d109e" host="172.31.26.26" Jul 2 00:23:45.196271 containerd[2082]: 2024-07-02 00:23:45.118 [INFO][4491] ipam.go 372: Looking up existing affinities for host host="172.31.26.26" Jul 2 00:23:45.196271 containerd[2082]: 2024-07-02 00:23:45.126 [INFO][4491] ipam.go 489: Trying affinity for 192.168.38.192/26 host="172.31.26.26" Jul 2 00:23:45.196271 containerd[2082]: 2024-07-02 00:23:45.128 [INFO][4491] ipam.go 155: Attempting to load block cidr=192.168.38.192/26 host="172.31.26.26" Jul 2 00:23:45.196271 containerd[2082]: 2024-07-02 00:23:45.132 [INFO][4491] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.38.192/26 host="172.31.26.26" Jul 2 00:23:45.196271 containerd[2082]: 2024-07-02 00:23:45.132 [INFO][4491] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.38.192/26 handle="k8s-pod-network.15bcfa41231da2f5b38df8017fd566bd3f70bdf6b3e7c987492677e2221d109e" host="172.31.26.26" Jul 2 00:23:45.196271 containerd[2082]: 2024-07-02 00:23:45.134 [INFO][4491] ipam.go 1685: Creating new handle: k8s-pod-network.15bcfa41231da2f5b38df8017fd566bd3f70bdf6b3e7c987492677e2221d109e Jul 2 00:23:45.196271 containerd[2082]: 2024-07-02 00:23:45.143 [INFO][4491] ipam.go 1203: Writing block in order to claim IPs block=192.168.38.192/26 handle="k8s-pod-network.15bcfa41231da2f5b38df8017fd566bd3f70bdf6b3e7c987492677e2221d109e" host="172.31.26.26" Jul 2 00:23:45.196271 containerd[2082]: 2024-07-02 00:23:45.153 [INFO][4491] ipam.go 1216: Successfully claimed IPs: [192.168.38.197/26] block=192.168.38.192/26 handle="k8s-pod-network.15bcfa41231da2f5b38df8017fd566bd3f70bdf6b3e7c987492677e2221d109e" host="172.31.26.26" Jul 2 00:23:45.196271 containerd[2082]: 2024-07-02 00:23:45.153 [INFO][4491] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.38.197/26] handle="k8s-pod-network.15bcfa41231da2f5b38df8017fd566bd3f70bdf6b3e7c987492677e2221d109e" host="172.31.26.26" Jul 2 00:23:45.196271 containerd[2082]: 2024-07-02 00:23:45.154 [INFO][4491] ipam_plugin.go 373: Released host-wide IPAM lock. 
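The ipam.go entries above trace Calico claiming a pod IP for test-pod-1: the node's affinity for the 192.168.38.192/26 block is confirmed, the block is loaded, and the next free address (192.168.38.197) is written back to claim it. A minimal Go sketch of that "next free address in a per-node block" idea follows; the types and helper names are invented for illustration and this is not Calico's implementation, only the shape of the allocation step.

package main

import (
	"fmt"
	"net"
)

// block is a stand-in for a per-node IPAM block: a CIDR with
// affinity to this host plus the set of addresses already handed out.
type block struct {
	cidr *net.IPNet
	used map[string]bool
}

// nextFree scans the block in address order and claims the first
// address not yet in use.
func (b *block) nextFree() (net.IP, bool) {
	for ip := b.cidr.IP.Mask(b.cidr.Mask); b.cidr.Contains(ip); ip = inc(ip) {
		if !b.used[ip.String()] {
			b.used[ip.String()] = true
			return ip, true
		}
	}
	return nil, false
}

// inc returns ip+1.
func inc(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.38.192/26")
	// Illustrative only: pretend .192 through .196 are already claimed on this node.
	b := &block{cidr: cidr, used: map[string]bool{
		"192.168.38.192": true, "192.168.38.193": true, "192.168.38.194": true,
		"192.168.38.195": true, "192.168.38.196": true,
	}}
	if ip, ok := b.nextFree(); ok {
		fmt.Println("assigned", ip) // assigned 192.168.38.197
	}
}

In the surrounding entries the same step is serialized by the host-wide IPAM lock and the block is persisted ("Writing block in order to claim IPs") rather than held in memory as in this sketch.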
Jul 2 00:23:45.196271 containerd[2082]: 2024-07-02 00:23:45.154 [INFO][4491] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.38.197/26] IPv6=[] ContainerID="15bcfa41231da2f5b38df8017fd566bd3f70bdf6b3e7c987492677e2221d109e" HandleID="k8s-pod-network.15bcfa41231da2f5b38df8017fd566bd3f70bdf6b3e7c987492677e2221d109e" Workload="172.31.26.26-k8s-test--pod--1-eth0" Jul 2 00:23:45.196271 containerd[2082]: 2024-07-02 00:23:45.156 [INFO][4485] k8s.go 386: Populated endpoint ContainerID="15bcfa41231da2f5b38df8017fd566bd3f70bdf6b3e7c987492677e2221d109e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.26.26-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.26-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"7faec58a-9dca-41ee-a4fd-dbdf63de3410", ResourceVersion:"1310", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 23, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.26.26", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.38.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:23:45.196271 containerd[2082]: 2024-07-02 00:23:45.157 [INFO][4485] k8s.go 387: Calico CNI using IPs: [192.168.38.197/32] ContainerID="15bcfa41231da2f5b38df8017fd566bd3f70bdf6b3e7c987492677e2221d109e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.26.26-k8s-test--pod--1-eth0" Jul 2 00:23:45.200300 containerd[2082]: 2024-07-02 00:23:45.157 [INFO][4485] dataplane_linux.go 68: Setting the host side veth name to cali5ec59c6bf6e ContainerID="15bcfa41231da2f5b38df8017fd566bd3f70bdf6b3e7c987492677e2221d109e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.26.26-k8s-test--pod--1-eth0" Jul 2 00:23:45.200300 containerd[2082]: 2024-07-02 00:23:45.160 [INFO][4485] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="15bcfa41231da2f5b38df8017fd566bd3f70bdf6b3e7c987492677e2221d109e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.26.26-k8s-test--pod--1-eth0" Jul 2 00:23:45.200300 containerd[2082]: 2024-07-02 00:23:45.161 [INFO][4485] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="15bcfa41231da2f5b38df8017fd566bd3f70bdf6b3e7c987492677e2221d109e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.26.26-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.26.26-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"7faec58a-9dca-41ee-a4fd-dbdf63de3410", ResourceVersion:"1310", Generation:0, CreationTimestamp:time.Date(2024, time.July, 2, 0, 23, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.26.26", ContainerID:"15bcfa41231da2f5b38df8017fd566bd3f70bdf6b3e7c987492677e2221d109e", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.38.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"0a:0c:f6:40:7f:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jul 2 00:23:45.200300 containerd[2082]: 2024-07-02 00:23:45.193 [INFO][4485] k8s.go 500: Wrote updated endpoint to datastore ContainerID="15bcfa41231da2f5b38df8017fd566bd3f70bdf6b3e7c987492677e2221d109e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.26.26-k8s-test--pod--1-eth0" Jul 2 00:23:45.255473 containerd[2082]: time="2024-07-02T00:23:45.255299190Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:23:45.255473 containerd[2082]: time="2024-07-02T00:23:45.255372190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:23:45.256579 containerd[2082]: time="2024-07-02T00:23:45.255399054Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:23:45.256579 containerd[2082]: time="2024-07-02T00:23:45.255492588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:23:45.383401 containerd[2082]: time="2024-07-02T00:23:45.383344612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:7faec58a-9dca-41ee-a4fd-dbdf63de3410,Namespace:default,Attempt:0,} returns sandbox id \"15bcfa41231da2f5b38df8017fd566bd3f70bdf6b3e7c987492677e2221d109e\"" Jul 2 00:23:45.390067 containerd[2082]: time="2024-07-02T00:23:45.390028477Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 2 00:23:45.767570 containerd[2082]: time="2024-07-02T00:23:45.767435570Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 00:23:45.774505 containerd[2082]: time="2024-07-02T00:23:45.772718503Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jul 2 00:23:45.780611 containerd[2082]: time="2024-07-02T00:23:45.780558614Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:a1bda1bb6f7f0fd17a3ae397f26593ab0aa8e8b92e3e8a9903f99fdb26afea17\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:bf28ef5d86aca0cd30a8ef19032ccadc1eada35dc9f14f42f3ccb73974f013de\", size \"70999878\" in 390.484979ms" Jul 2 00:23:45.781205 containerd[2082]: time="2024-07-02T00:23:45.781044834Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:a1bda1bb6f7f0fd17a3ae397f26593ab0aa8e8b92e3e8a9903f99fdb26afea17\"" Jul 2 00:23:45.795518 containerd[2082]: time="2024-07-02T00:23:45.795435391Z" level=info msg="CreateContainer within sandbox \"15bcfa41231da2f5b38df8017fd566bd3f70bdf6b3e7c987492677e2221d109e\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jul 2 00:23:45.875353 containerd[2082]: time="2024-07-02T00:23:45.875302798Z" level=info msg="CreateContainer within sandbox \"15bcfa41231da2f5b38df8017fd566bd3f70bdf6b3e7c987492677e2221d109e\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"63292994f7d773e40d2e23b6d66b2ebb6bbf0dcfb0ffa739efb24396ba93ffa9\"" Jul 2 00:23:45.878856 containerd[2082]: time="2024-07-02T00:23:45.877556577Z" level=info msg="StartContainer for \"63292994f7d773e40d2e23b6d66b2ebb6bbf0dcfb0ffa739efb24396ba93ffa9\"" Jul 2 00:23:45.978350 containerd[2082]: time="2024-07-02T00:23:45.978302822Z" level=info msg="StartContainer for \"63292994f7d773e40d2e23b6d66b2ebb6bbf0dcfb0ffa739efb24396ba93ffa9\" returns successfully" Jul 2 00:23:46.036073 kubelet[2576]: E0702 00:23:46.031894 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:46.358758 kubelet[2576]: I0702 00:23:46.358608 2576 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 2 00:23:46.542511 kubelet[2576]: I0702 00:23:46.540856 2576 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=35.147800412 podCreationTimestamp="2024-07-02 00:23:11 +0000 UTC" firstStartedPulling="2024-07-02 00:23:45.389429072 +0000 UTC m=+83.748630841" lastFinishedPulling="2024-07-02 00:23:45.782434719 +0000 UTC m=+84.141636498" observedRunningTime="2024-07-02 00:23:46.539531239 +0000 UTC m=+84.898733028" watchObservedRunningTime="2024-07-02 00:23:46.540806069 +0000 UTC m=+84.900007857" Jul 2 00:23:47.049736 kubelet[2576]: E0702 00:23:47.049680 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jul 2 00:23:47.119905 systemd-networkd[1655]: cali5ec59c6bf6e: Gained IPv6LL Jul 2 00:23:48.050532 kubelet[2576]: E0702 00:23:48.050464 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:49.051284 kubelet[2576]: E0702 00:23:49.051184 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:49.796802 ntpd[2038]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%10]:123 Jul 2 00:23:49.797248 ntpd[2038]: 2 Jul 00:23:49 ntpd[2038]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%10]:123 Jul 2 00:23:50.052419 kubelet[2576]: E0702 00:23:50.052287 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:51.053172 kubelet[2576]: E0702 00:23:51.053123 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:52.053579 kubelet[2576]: E0702 00:23:52.053527 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:53.054451 kubelet[2576]: E0702 00:23:53.054399 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:54.054951 kubelet[2576]: E0702 00:23:54.054900 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:55.055393 kubelet[2576]: E0702 00:23:55.055343 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:56.056423 kubelet[2576]: E0702 00:23:56.056367 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:57.056891 kubelet[2576]: E0702 00:23:57.056840 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:58.058023 kubelet[2576]: E0702 00:23:58.057972 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:23:59.058438 kubelet[2576]: E0702 00:23:59.058381 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:00.058963 kubelet[2576]: E0702 00:24:00.058901 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:01.059366 kubelet[2576]: E0702 00:24:01.059312 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:01.948524 kubelet[2576]: E0702 00:24:01.948465 2576 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:02.059677 kubelet[2576]: E0702 00:24:02.059626 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:03.059816 kubelet[2576]: E0702 00:24:03.059760 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:04.060977 kubelet[2576]: E0702 00:24:04.060923 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Jul 2 00:24:05.062149 kubelet[2576]: E0702 00:24:05.062102 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:05.135531 kubelet[2576]: E0702 00:24:05.135469 2576 controller.go:193] "Failed to update lease" err="Put \"https://172.31.27.179:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.26.26?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 2 00:24:06.062341 kubelet[2576]: E0702 00:24:06.062265 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:07.063382 kubelet[2576]: E0702 00:24:07.063333 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:08.063580 kubelet[2576]: E0702 00:24:08.063509 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:09.064229 kubelet[2576]: E0702 00:24:09.064173 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:10.065279 kubelet[2576]: E0702 00:24:10.065224 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:11.065418 kubelet[2576]: E0702 00:24:11.065367 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:12.066001 kubelet[2576]: E0702 00:24:12.065949 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:13.066142 kubelet[2576]: E0702 00:24:13.066094 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:14.067085 kubelet[2576]: E0702 00:24:14.067034 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:15.067723 kubelet[2576]: E0702 00:24:15.067671 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:15.136330 kubelet[2576]: E0702 00:24:15.136245 2576 controller.go:193] "Failed to update lease" err="Put \"https://172.31.27.179:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.26.26?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 2 00:24:16.068654 kubelet[2576]: E0702 00:24:16.068599 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:17.069422 kubelet[2576]: E0702 00:24:17.069371 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:18.070453 kubelet[2576]: E0702 00:24:18.070402 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:19.070615 kubelet[2576]: E0702 00:24:19.070577 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:19.096453 systemd[1]: run-containerd-runc-k8s.io-a9e44d4a2a1921c7ee3c8f95a2b099e4ed547d49528e34a1c3e6b3931dff795c-runc.mpQNMt.mount: Deactivated successfully. 
Jul 2 00:24:20.072066 kubelet[2576]: E0702 00:24:20.072009 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:21.072262 kubelet[2576]: E0702 00:24:21.072203 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:21.948622 kubelet[2576]: E0702 00:24:21.948573 2576 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:22.073308 kubelet[2576]: E0702 00:24:22.073254 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:23.075946 kubelet[2576]: E0702 00:24:23.075873 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:24.076638 kubelet[2576]: E0702 00:24:24.076577 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:25.076962 kubelet[2576]: E0702 00:24:25.076909 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:25.137062 kubelet[2576]: E0702 00:24:25.136992 2576 controller.go:193] "Failed to update lease" err="Put \"https://172.31.27.179:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.26.26?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 2 00:24:26.077462 kubelet[2576]: E0702 00:24:26.077404 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:27.077905 kubelet[2576]: E0702 00:24:27.077848 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:28.078687 kubelet[2576]: E0702 00:24:28.078637 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:29.079244 kubelet[2576]: E0702 00:24:29.079190 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:30.079470 kubelet[2576]: E0702 00:24:30.079413 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:31.080548 kubelet[2576]: E0702 00:24:31.080502 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:32.081463 kubelet[2576]: E0702 00:24:32.081411 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:33.082432 kubelet[2576]: E0702 00:24:33.082376 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:34.082548 kubelet[2576]: E0702 00:24:34.082462 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:35.083390 kubelet[2576]: E0702 00:24:35.083330 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:35.137946 kubelet[2576]: E0702 00:24:35.137902 2576 controller.go:193] "Failed to update lease" err="Put 
\"https://172.31.27.179:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.26.26?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 2 00:24:35.904549 kubelet[2576]: E0702 00:24:35.904517 2576 controller.go:193] "Failed to update lease" err="Put \"https://172.31.27.179:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.26.26?timeout=10s\": unexpected EOF" Jul 2 00:24:35.904705 kubelet[2576]: I0702 00:24:35.904563 2576 controller.go:116] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jul 2 00:24:36.084619 kubelet[2576]: E0702 00:24:36.084581 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:36.506128 kubelet[2576]: E0702 00:24:36.506085 2576 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.26.26\": Get \"https://172.31.27.179:6443/api/v1/nodes/172.31.26.26?resourceVersion=0&timeout=10s\": dial tcp 172.31.27.179:6443: connect: connection refused" Jul 2 00:24:36.506771 kubelet[2576]: E0702 00:24:36.506586 2576 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.26.26\": Get \"https://172.31.27.179:6443/api/v1/nodes/172.31.26.26?timeout=10s\": dial tcp 172.31.27.179:6443: connect: connection refused" Jul 2 00:24:36.506985 kubelet[2576]: E0702 00:24:36.506963 2576 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.26.26\": Get \"https://172.31.27.179:6443/api/v1/nodes/172.31.26.26?timeout=10s\": dial tcp 172.31.27.179:6443: connect: connection refused" Jul 2 00:24:36.507497 kubelet[2576]: E0702 00:24:36.507464 2576 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.26.26\": Get \"https://172.31.27.179:6443/api/v1/nodes/172.31.26.26?timeout=10s\": dial tcp 172.31.27.179:6443: connect: connection refused" Jul 2 00:24:36.508229 kubelet[2576]: E0702 00:24:36.508185 2576 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.26.26\": Get \"https://172.31.27.179:6443/api/v1/nodes/172.31.26.26?timeout=10s\": dial tcp 172.31.27.179:6443: connect: connection refused" Jul 2 00:24:36.508229 kubelet[2576]: E0702 00:24:36.508211 2576 kubelet_node_status.go:527] "Unable to update node status" err="update node status exceeds retry count" Jul 2 00:24:36.922290 kubelet[2576]: E0702 00:24:36.922026 2576 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.179:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.26.26?timeout=10s\": dial tcp 172.31.27.179:6443: connect: connection refused - error from a previous attempt: read tcp 172.31.26.26:53296->172.31.27.179:6443: read: connection reset by peer" interval="200ms" Jul 2 00:24:37.085629 kubelet[2576]: E0702 00:24:37.085578 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:37.124733 kubelet[2576]: E0702 00:24:37.124691 2576 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.179:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.26.26?timeout=10s\": dial tcp 172.31.27.179:6443: connect: connection refused" interval="400ms" Jul 2 00:24:37.527846 kubelet[2576]: E0702 00:24:37.527807 
2576 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.179:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.26.26?timeout=10s\": dial tcp 172.31.27.179:6443: connect: connection refused" interval="800ms" Jul 2 00:24:38.089139 kubelet[2576]: E0702 00:24:38.089073 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:38.331937 kubelet[2576]: E0702 00:24:38.331896 2576 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.179:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.26.26?timeout=10s\": dial tcp 172.31.27.179:6443: connect: connection refused" interval="1.6s" Jul 2 00:24:39.089763 kubelet[2576]: E0702 00:24:39.089690 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:39.933377 kubelet[2576]: E0702 00:24:39.933341 2576 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.179:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.26.26?timeout=10s\": dial tcp 172.31.27.179:6443: connect: connection refused" interval="3.2s" Jul 2 00:24:40.090327 kubelet[2576]: E0702 00:24:40.090275 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:41.090964 kubelet[2576]: E0702 00:24:41.090906 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:41.948649 kubelet[2576]: E0702 00:24:41.948597 2576 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:42.091935 kubelet[2576]: E0702 00:24:42.091883 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:43.092164 kubelet[2576]: E0702 00:24:43.092115 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:44.092652 kubelet[2576]: E0702 00:24:44.092605 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:45.093655 kubelet[2576]: E0702 00:24:45.093598 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:46.094054 kubelet[2576]: E0702 00:24:46.093999 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:47.094697 kubelet[2576]: E0702 00:24:47.094648 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:48.095544 kubelet[2576]: E0702 00:24:48.095478 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:49.096172 kubelet[2576]: E0702 00:24:49.096120 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:50.096404 kubelet[2576]: E0702 00:24:50.096330 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:51.096501 kubelet[2576]: E0702 00:24:51.096457 2576 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:52.097132 kubelet[2576]: E0702 00:24:52.096806 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:53.097529 kubelet[2576]: E0702 00:24:53.097382 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:53.134740 kubelet[2576]: E0702 00:24:53.134695 2576 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.179:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.26.26?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Jul 2 00:24:54.097693 kubelet[2576]: E0702 00:24:54.097647 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:55.098035 kubelet[2576]: E0702 00:24:55.097983 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:56.098684 kubelet[2576]: E0702 00:24:56.098397 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 2 00:24:56.516340 kubelet[2576]: E0702 00:24:56.516206 2576 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"172.31.26.26\": Get \"https://172.31.27.179:6443/api/v1/nodes/172.31.26.26?resourceVersion=0&timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jul 2 00:24:57.098945 kubelet[2576]: E0702 00:24:57.098897 2576 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"