Feb 13 15:43:16.056810 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 14:00:20 -00 2025
Feb 13 15:43:16.056852 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f6a3351ed39d61c0cb6d1964ad84b777665fb0b2f253a15f9696d9c5fba26f65
Feb 13 15:43:16.056869 kernel: BIOS-provided physical RAM map:
Feb 13 15:43:16.056880 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 15:43:16.056891 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 15:43:16.056903 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 15:43:16.056920 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Feb 13 15:43:16.056932 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Feb 13 15:43:16.056944 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Feb 13 15:43:16.056956 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 15:43:16.056968 kernel: NX (Execute Disable) protection: active
Feb 13 15:43:16.056980 kernel: APIC: Static calls initialized
Feb 13 15:43:16.056992 kernel: SMBIOS 2.7 present.
Feb 13 15:43:16.057015 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Feb 13 15:43:16.057034 kernel: Hypervisor detected: KVM
Feb 13 15:43:16.057047 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 15:43:16.057061 kernel: kvm-clock: using sched offset of 8779211961 cycles
Feb 13 15:43:16.057075 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 15:43:16.057089 kernel: tsc: Detected 2500.004 MHz processor
Feb 13 15:43:16.057103 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 15:43:16.057117 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 15:43:16.057132 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Feb 13 15:43:16.057145 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 15:43:16.057160 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Feb 13 15:43:16.057174 kernel: Using GB pages for direct mapping
Feb 13 15:43:16.057188 kernel: ACPI: Early table checksum verification disabled
Feb 13 15:43:16.057203 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Feb 13 15:43:16.057216 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Feb 13 15:43:16.057231 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 15:43:16.057244 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Feb 13 15:43:16.057262 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Feb 13 15:43:16.057277 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 13 15:43:16.057289 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 15:43:16.057301 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Feb 13 15:43:16.057313 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 15:43:16.057325 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Feb 13 15:43:16.057338 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Feb 13 15:43:16.057352 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Feb 13 15:43:16.057365 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Feb 13 15:43:16.057382 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Feb 13 15:43:16.057399 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Feb 13 15:43:16.057412 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Feb 13 15:43:16.057425 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Feb 13 15:43:16.057438 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Feb 13 15:43:16.057454 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Feb 13 15:43:16.057468 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Feb 13 15:43:16.057482 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Feb 13 15:43:16.057495 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Feb 13 15:43:16.057509 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 15:43:16.057522 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 15:43:16.057536 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Feb 13 15:43:16.057550 kernel: NUMA: Initialized distance table, cnt=1
Feb 13 15:43:16.057564 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Feb 13 15:43:16.057580 kernel: Zone ranges:
Feb 13 15:43:16.057594 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 15:43:16.057608 kernel:   DMA32    [mem 0x0000000001000000-0x000000007d9e9fff]
Feb 13 15:43:16.057622 kernel:   Normal   empty
Feb 13 15:43:16.057637 kernel: Movable zone start for each node
Feb 13 15:43:16.057651 kernel: Early memory node ranges
Feb 13 15:43:16.057665 kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 15:43:16.057679 kernel:   node   0: [mem 0x0000000000100000-0x000000007d9e9fff]
Feb 13 15:43:16.057693 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Feb 13 15:43:16.057708 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 15:43:16.057725 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 15:43:16.057739 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Feb 13 15:43:16.057754 kernel: ACPI: PM-Timer IO Port: 0xb008
Feb 13 15:43:16.057768 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 15:43:16.057783 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Feb 13 15:43:16.057797 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 15:43:16.057811 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 15:43:16.057825 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 15:43:16.057840 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 15:43:16.057857 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 15:43:16.057871 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 15:43:16.057886 kernel: TSC deadline timer available
Feb 13 15:43:16.057900 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 13 15:43:16.058321 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 15:43:16.058341 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Feb 13 15:43:16.058356 kernel: Booting paravirtualized kernel on KVM
Feb 13 15:43:16.058371 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 15:43:16.058386 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 15:43:16.058408 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 15:43:16.058422 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 15:43:16.058437 kernel: pcpu-alloc: [0] 0 1 
Feb 13 15:43:16.058451 kernel: kvm-guest: PV spinlocks enabled
Feb 13 15:43:16.058585 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 15:43:16.058607 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f6a3351ed39d61c0cb6d1964ad84b777665fb0b2f253a15f9696d9c5fba26f65
Feb 13 15:43:16.058623 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 15:43:16.058637 kernel: random: crng init done
Feb 13 15:43:16.058656 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 15:43:16.058671 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 15:43:16.058685 kernel: Fallback order for Node 0: 0 
Feb 13 15:43:16.058700 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 506242
Feb 13 15:43:16.058715 kernel: Policy zone: DMA32
Feb 13 15:43:16.058731 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 15:43:16.058746 kernel: Memory: 1930300K/2057760K available (14336K kernel code, 2301K rwdata, 22852K rodata, 43476K init, 1596K bss, 127200K reserved, 0K cma-reserved)
Feb 13 15:43:16.058762 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 15:43:16.058776 kernel: Kernel/User page tables isolation: enabled
Feb 13 15:43:16.058793 kernel: ftrace: allocating 37893 entries in 149 pages
Feb 13 15:43:16.058852 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 15:43:16.058867 kernel: Dynamic Preempt: voluntary
Feb 13 15:43:16.058882 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 15:43:16.058897 kernel: rcu:         RCU event tracing is enabled.
Feb 13 15:43:16.058971 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 15:43:16.058987 kernel:         Trampoline variant of Tasks RCU enabled.
Feb 13 15:43:16.059013 kernel:         Rude variant of Tasks RCU enabled.
Feb 13 15:43:16.059028 kernel:         Tracing variant of Tasks RCU enabled.
Feb 13 15:43:16.059047 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 15:43:16.059061 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 15:43:16.059075 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 13 15:43:16.059090 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 15:43:16.059105 kernel: Console: colour VGA+ 80x25
Feb 13 15:43:16.059120 kernel: printk: console [ttyS0] enabled
Feb 13 15:43:16.059134 kernel: ACPI: Core revision 20230628
Feb 13 15:43:16.059149 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Feb 13 15:43:16.059162 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 15:43:16.059179 kernel: x2apic enabled
Feb 13 15:43:16.059193 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 15:43:16.059219 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns
Feb 13 15:43:16.059236 kernel: Calibrating delay loop (skipped) preset value.. 5000.00 BogoMIPS (lpj=2500004)
Feb 13 15:43:16.059251 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 13 15:43:16.059266 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 13 15:43:16.059280 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 15:43:16.059294 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 15:43:16.059308 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 15:43:16.059323 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 15:43:16.059338 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 13 15:43:16.059352 kernel: RETBleed: Vulnerable
Feb 13 15:43:16.059367 kernel: Speculative Store Bypass: Vulnerable
Feb 13 15:43:16.059384 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 15:43:16.059398 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 15:43:16.059412 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 13 15:43:16.059427 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 15:43:16.059441 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 15:43:16.059456 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 15:43:16.059473 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Feb 13 15:43:16.059487 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Feb 13 15:43:16.059501 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 13 15:43:16.059516 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 13 15:43:16.059530 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 13 15:43:16.059544 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Feb 13 15:43:16.059559 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Feb 13 15:43:16.059573 kernel: x86/fpu: xstate_offset[3]:  832, xstate_sizes[3]:   64
Feb 13 15:43:16.059587 kernel: x86/fpu: xstate_offset[4]:  896, xstate_sizes[4]:   64
Feb 13 15:43:16.059601 kernel: x86/fpu: xstate_offset[5]:  960, xstate_sizes[5]:   64
Feb 13 15:43:16.059615 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]:  512
Feb 13 15:43:16.059632 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Feb 13 15:43:16.059647 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]:    8
Feb 13 15:43:16.059661 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Feb 13 15:43:16.059676 kernel: Freeing SMP alternatives memory: 32K
Feb 13 15:43:16.059690 kernel: pid_max: default: 32768 minimum: 301
Feb 13 15:43:16.059704 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 15:43:16.059719 kernel: landlock: Up and running.
Feb 13 15:43:16.059733 kernel: SELinux:  Initializing.
Feb 13 15:43:16.059748 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 15:43:16.059762 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 15:43:16.060288 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 13 15:43:16.060337 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:43:16.060355 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:43:16.060371 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 15:43:16.060386 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 13 15:43:16.060402 kernel: signal: max sigframe size: 3632
Feb 13 15:43:16.060417 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 15:43:16.060433 kernel: rcu:         Max phase no-delay instances is 400.
Feb 13 15:43:16.060447 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 15:43:16.060462 kernel: smp: Bringing up secondary CPUs ...
Feb 13 15:43:16.060481 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 15:43:16.060496 kernel: .... node  #0, CPUs:      #1
Feb 13 15:43:16.060513 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Feb 13 15:43:16.060529 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 13 15:43:16.060544 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 15:43:16.060559 kernel: smpboot: Max logical packages: 1
Feb 13 15:43:16.060574 kernel: smpboot: Total of 2 processors activated (10000.01 BogoMIPS)
Feb 13 15:43:16.060589 kernel: devtmpfs: initialized
Feb 13 15:43:16.060604 kernel: x86/mm: Memory block size: 128MB
Feb 13 15:43:16.060622 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 15:43:16.060637 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 15:43:16.060653 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 15:43:16.060668 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 15:43:16.060683 kernel: audit: initializing netlink subsys (disabled)
Feb 13 15:43:16.060698 kernel: audit: type=2000 audit(1739461394.861:1): state=initialized audit_enabled=0 res=1
Feb 13 15:43:16.060713 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 15:43:16.060728 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 15:43:16.060743 kernel: cpuidle: using governor menu
Feb 13 15:43:16.060761 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 15:43:16.060776 kernel: dca service started, version 1.12.1
Feb 13 15:43:16.060791 kernel: PCI: Using configuration type 1 for base access
Feb 13 15:43:16.060807 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 15:43:16.060823 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 15:43:16.060838 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 15:43:16.060854 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 15:43:16.060869 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 15:43:16.060887 kernel: ACPI: Added _OSI(Module Device)
Feb 13 15:43:16.060903 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 15:43:16.060918 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 15:43:16.060933 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 15:43:16.060949 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Feb 13 15:43:16.060964 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 15:43:16.060979 kernel: ACPI: Interpreter enabled
Feb 13 15:43:16.060994 kernel: ACPI: PM: (supports S0 S5)
Feb 13 15:43:16.061029 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 15:43:16.061044 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 15:43:16.061062 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 15:43:16.061076 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Feb 13 15:43:16.061091 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 15:43:16.061358 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 15:43:16.061495 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Feb 13 15:43:16.061623 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Feb 13 15:43:16.061641 kernel: acpiphp: Slot [3] registered
Feb 13 15:43:16.061660 kernel: acpiphp: Slot [4] registered
Feb 13 15:43:16.061682 kernel: acpiphp: Slot [5] registered
Feb 13 15:43:16.061696 kernel: acpiphp: Slot [6] registered
Feb 13 15:43:16.061711 kernel: acpiphp: Slot [7] registered
Feb 13 15:43:16.061725 kernel: acpiphp: Slot [8] registered
Feb 13 15:43:16.061740 kernel: acpiphp: Slot [9] registered
Feb 13 15:43:16.061754 kernel: acpiphp: Slot [10] registered
Feb 13 15:43:16.061769 kernel: acpiphp: Slot [11] registered
Feb 13 15:43:16.061784 kernel: acpiphp: Slot [12] registered
Feb 13 15:43:16.061803 kernel: acpiphp: Slot [13] registered
Feb 13 15:43:16.061818 kernel: acpiphp: Slot [14] registered
Feb 13 15:43:16.061832 kernel: acpiphp: Slot [15] registered
Feb 13 15:43:16.061847 kernel: acpiphp: Slot [16] registered
Feb 13 15:43:16.061862 kernel: acpiphp: Slot [17] registered
Feb 13 15:43:16.061877 kernel: acpiphp: Slot [18] registered
Feb 13 15:43:16.061891 kernel: acpiphp: Slot [19] registered
Feb 13 15:43:16.061906 kernel: acpiphp: Slot [20] registered
Feb 13 15:43:16.061921 kernel: acpiphp: Slot [21] registered
Feb 13 15:43:16.061935 kernel: acpiphp: Slot [22] registered
Feb 13 15:43:16.061953 kernel: acpiphp: Slot [23] registered
Feb 13 15:43:16.061968 kernel: acpiphp: Slot [24] registered
Feb 13 15:43:16.061982 kernel: acpiphp: Slot [25] registered
Feb 13 15:43:16.061997 kernel: acpiphp: Slot [26] registered
Feb 13 15:43:16.062023 kernel: acpiphp: Slot [27] registered
Feb 13 15:43:16.062039 kernel: acpiphp: Slot [28] registered
Feb 13 15:43:16.062053 kernel: acpiphp: Slot [29] registered
Feb 13 15:43:16.062068 kernel: acpiphp: Slot [30] registered
Feb 13 15:43:16.062083 kernel: acpiphp: Slot [31] registered
Feb 13 15:43:16.062101 kernel: PCI host bridge to bus 0000:00
Feb 13 15:43:16.062263 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Feb 13 15:43:16.062395 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Feb 13 15:43:16.062511 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 15:43:16.062625 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 13 15:43:16.070538 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 15:43:16.070756 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 13 15:43:16.070927 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 13 15:43:16.071091 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Feb 13 15:43:16.071233 kernel: pci 0000:00:01.3: quirk: [io  0xb000-0xb03f] claimed by PIIX4 ACPI
Feb 13 15:43:16.071369 kernel: pci 0000:00:01.3: quirk: [io  0xb100-0xb10f] claimed by PIIX4 SMB
Feb 13 15:43:16.071505 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Feb 13 15:43:16.072371 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Feb 13 15:43:16.072533 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Feb 13 15:43:16.072682 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Feb 13 15:43:16.072821 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Feb 13 15:43:16.072956 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Feb 13 15:43:16.081486 kernel: pci 0000:00:01.3: quirk_piix4_acpi+0x0/0x180 took 10742 usecs
Feb 13 15:43:16.081735 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Feb 13 15:43:16.081897 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Feb 13 15:43:16.082221 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Feb 13 15:43:16.082423 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 15:43:16.082578 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 15:43:16.082716 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Feb 13 15:43:16.082960 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 15:43:16.083393 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Feb 13 15:43:16.083423 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 15:43:16.083448 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 15:43:16.083464 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 15:43:16.083480 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 15:43:16.083492 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 13 15:43:16.083507 kernel: iommu: Default domain type: Translated
Feb 13 15:43:16.083523 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 15:43:16.083538 kernel: PCI: Using ACPI for IRQ routing
Feb 13 15:43:16.083553 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 15:43:16.083633 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 15:43:16.083652 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Feb 13 15:43:16.083825 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Feb 13 15:43:16.086192 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Feb 13 15:43:16.086470 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 15:43:16.086494 kernel: vgaarb: loaded
Feb 13 15:43:16.086510 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Feb 13 15:43:16.086526 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Feb 13 15:43:16.086541 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 15:43:16.086557 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 15:43:16.086581 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 15:43:16.086596 kernel: pnp: PnP ACPI init
Feb 13 15:43:16.086611 kernel: pnp: PnP ACPI: found 5 devices
Feb 13 15:43:16.086626 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 15:43:16.086641 kernel: NET: Registered PF_INET protocol family
Feb 13 15:43:16.086656 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 15:43:16.086671 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 13 15:43:16.086686 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 15:43:16.086701 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 15:43:16.086720 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 13 15:43:16.086734 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 13 15:43:16.086757 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 15:43:16.086778 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 15:43:16.086793 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 15:43:16.086807 kernel: NET: Registered PF_XDP protocol family
Feb 13 15:43:16.086957 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Feb 13 15:43:16.087804 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Feb 13 15:43:16.087948 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 15:43:16.088084 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 13 15:43:16.088225 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 13 15:43:16.088244 kernel: PCI: CLS 0 bytes, default 64
Feb 13 15:43:16.088260 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 15:43:16.088275 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093d6e846, max_idle_ns: 440795249997 ns
Feb 13 15:43:16.088289 kernel: clocksource: Switched to clocksource tsc
Feb 13 15:43:16.088303 kernel: Initialise system trusted keyrings
Feb 13 15:43:16.088321 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 13 15:43:16.088336 kernel: Key type asymmetric registered
Feb 13 15:43:16.088349 kernel: Asymmetric key parser 'x509' registered
Feb 13 15:43:16.088364 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 15:43:16.088378 kernel: io scheduler mq-deadline registered
Feb 13 15:43:16.088392 kernel: io scheduler kyber registered
Feb 13 15:43:16.088406 kernel: io scheduler bfq registered
Feb 13 15:43:16.088420 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 15:43:16.088435 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 15:43:16.088452 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 15:43:16.088466 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 15:43:16.088480 kernel: i8042: Warning: Keylock active
Feb 13 15:43:16.088493 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 15:43:16.088507 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 15:43:16.088648 kernel: rtc_cmos 00:00: RTC can wake from S4
Feb 13 15:43:16.088774 kernel: rtc_cmos 00:00: registered as rtc0
Feb 13 15:43:16.088895 kernel: rtc_cmos 00:00: setting system clock to 2025-02-13T15:43:15 UTC (1739461395)
Feb 13 15:43:16.089039 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Feb 13 15:43:16.089058 kernel: intel_pstate: CPU model not supported
Feb 13 15:43:16.089072 kernel: NET: Registered PF_INET6 protocol family
Feb 13 15:43:16.089086 kernel: Segment Routing with IPv6
Feb 13 15:43:16.089100 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 15:43:16.089114 kernel: NET: Registered PF_PACKET protocol family
Feb 13 15:43:16.089128 kernel: Key type dns_resolver registered
Feb 13 15:43:16.089141 kernel: IPI shorthand broadcast: enabled
Feb 13 15:43:16.089155 kernel: sched_clock: Marking stable (747002061, 246328745)->(1120098985, -126768179)
Feb 13 15:43:16.089175 kernel: registered taskstats version 1
Feb 13 15:43:16.089191 kernel: Loading compiled-in X.509 certificates
Feb 13 15:43:16.089206 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: a260c8876205efb4ca2ab3eb040cd310ec7afd21'
Feb 13 15:43:16.089219 kernel: Key type .fscrypt registered
Feb 13 15:43:16.089233 kernel: Key type fscrypt-provisioning registered
Feb 13 15:43:16.089247 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 15:43:16.089261 kernel: ima: Allocated hash algorithm: sha1
Feb 13 15:43:16.089275 kernel: ima: No architecture policies found
Feb 13 15:43:16.089289 kernel: clk: Disabling unused clocks
Feb 13 15:43:16.089307 kernel: Freeing unused kernel image (initmem) memory: 43476K
Feb 13 15:43:16.089321 kernel: Write protecting the kernel read-only data: 38912k
Feb 13 15:43:16.089335 kernel: Freeing unused kernel image (rodata/data gap) memory: 1724K
Feb 13 15:43:16.089350 kernel: Run /init as init process
Feb 13 15:43:16.089364 kernel:   with arguments:
Feb 13 15:43:16.089378 kernel:     /init
Feb 13 15:43:16.089392 kernel:   with environment:
Feb 13 15:43:16.089406 kernel:     HOME=/
Feb 13 15:43:16.089419 kernel:     TERM=linux
Feb 13 15:43:16.089436 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 15:43:16.089476 systemd[1]: Successfully made /usr/ read-only.
Feb 13 15:43:16.089495 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 15:43:16.089511 systemd[1]: Detected virtualization amazon.
Feb 13 15:43:16.089526 systemd[1]: Detected architecture x86-64.
Feb 13 15:43:16.089540 systemd[1]: Running in initrd.
Feb 13 15:43:16.089555 systemd[1]: No hostname configured, using default hostname.
Feb 13 15:43:16.089574 systemd[1]: Hostname set to <localhost>.
Feb 13 15:43:16.089588 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:43:16.089604 systemd[1]: Queued start job for default target initrd.target.
Feb 13 15:43:16.089620 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:43:16.089637 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:43:16.089654 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 15:43:16.089668 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:43:16.089759 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 15:43:16.089783 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 15:43:16.089801 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 15:43:16.089816 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 15:43:16.090077 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:43:16.090100 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:43:16.090117 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:43:16.090133 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:43:16.090153 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:43:16.090176 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:43:16.090192 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:43:16.090212 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:43:16.090227 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 15:43:16.090243 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Feb 13 15:43:16.090259 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:43:16.090274 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:43:16.090345 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:43:16.090364 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:43:16.090415 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 15:43:16.090434 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:43:16.090449 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 15:43:16.090469 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 15:43:16.090487 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:43:16.090502 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:43:16.090518 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:43:16.090533 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 15:43:16.090549 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:43:16.090569 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 15:43:16.090586 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 15:43:16.090601 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 15:43:16.090653 systemd-journald[179]: Collecting audit messages is disabled.
Feb 13 15:43:16.090693 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 15:43:16.090709 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 15:43:16.090725 kernel: Bridge firewalling registered
Feb 13 15:43:16.090741 systemd-journald[179]: Journal started
Feb 13 15:43:16.090773 systemd-journald[179]: Runtime Journal (/run/log/journal/ec2b7d8b4608231e04c4b32fb1bfa905) is 4.8M, max 38.5M, 33.7M free.
Feb 13 15:43:16.012166 systemd-modules-load[180]: Inserted module 'overlay'
Feb 13 15:43:16.257613 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:43:16.086975 systemd-modules-load[180]: Inserted module 'br_netfilter'
Feb 13 15:43:16.258208 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:43:16.265545 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:43:16.280383 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:43:16.285230 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:43:16.288285 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:43:16.291776 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:43:16.330710 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:43:16.342130 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:43:16.357048 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:43:16.376638 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 15:43:16.381921 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:43:16.401292 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:43:16.420386 dracut-cmdline[214]: dracut-dracut-053
Feb 13 15:43:16.438303 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=f6a3351ed39d61c0cb6d1964ad84b777665fb0b2f253a15f9696d9c5fba26f65
Feb 13 15:43:16.496994 systemd-resolved[216]: Positive Trust Anchors:
Feb 13 15:43:16.497059 systemd-resolved[216]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:43:16.497120 systemd-resolved[216]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:43:16.502414 systemd-resolved[216]: Defaulting to hostname 'linux'.
Feb 13 15:43:16.517984 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:43:16.521345 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:43:16.579043 kernel: SCSI subsystem initialized
Feb 13 15:43:16.609053 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 15:43:16.642038 kernel: iscsi: registered transport (tcp)
Feb 13 15:43:16.673047 kernel: iscsi: registered transport (qla4xxx)
Feb 13 15:43:16.673124 kernel: QLogic iSCSI HBA Driver
Feb 13 15:43:16.756337 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:43:16.768338 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 15:43:16.802321 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 15:43:16.802402 kernel: device-mapper: uevent: version 1.0.3
Feb 13 15:43:16.802425 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 15:43:16.863062 kernel: raid6: avx512x4 gen()  9744 MB/s
Feb 13 15:43:16.880155 kernel: raid6: avx512x2 gen()  6999 MB/s
Feb 13 15:43:16.898104 kernel: raid6: avx512x1 gen()  9073 MB/s
Feb 13 15:43:16.917154 kernel: raid6: avx2x4   gen()  9590 MB/s
Feb 13 15:43:16.934197 kernel: raid6: avx2x2   gen()  6732 MB/s
Feb 13 15:43:16.951291 kernel: raid6: avx2x1   gen()  6744 MB/s
Feb 13 15:43:16.951370 kernel: raid6: using algorithm avx512x4 gen() 9744 MB/s
Feb 13 15:43:16.969058 kernel: raid6: .... xor() 4190 MB/s, rmw enabled
Feb 13 15:43:16.969146 kernel: raid6: using avx512x2 recovery algorithm
Feb 13 15:43:16.996035 kernel: xor: automatically using best checksumming function   avx       
Feb 13 15:43:17.254264 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 15:43:17.271867 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:43:17.278384 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:43:17.319415 systemd-udevd[399]: Using default interface naming scheme 'v255'.
Feb 13 15:43:17.336974 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:43:17.358301 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 15:43:17.381824 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation
Feb 13 15:43:17.439326 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:43:17.447273 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:43:17.532404 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:43:17.544375 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 15:43:17.575057 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:43:17.580061 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:43:17.581973 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:43:17.585516 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:43:17.595697 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 15:43:17.627906 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:43:17.648513 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 15:43:17.674499 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 15:43:17.674693 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Feb 13 15:43:17.674865 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:2f:81:bd:81:af
Feb 13 15:43:17.683105 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 15:43:17.686190 (udev-worker)[449]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:43:17.696444 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 15:43:17.696553 kernel: AES CTR mode by8 optimization enabled
Feb 13 15:43:17.697471 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:43:17.697648 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:43:17.700705 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:43:17.706709 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:43:17.711297 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 15:43:17.711548 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 13 15:43:17.706939 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:43:17.712732 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:43:17.724836 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:43:17.727741 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Feb 13 15:43:17.734217 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 15:43:17.744078 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 15:43:17.744146 kernel: GPT:9289727 != 16777215
Feb 13 15:43:17.744166 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 15:43:17.744184 kernel: GPT:9289727 != 16777215
Feb 13 15:43:17.744201 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 15:43:17.744219 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:43:17.859052 kernel: BTRFS: device fsid 506754f7-5ef1-4c63-ad2a-b7b855a48f85 devid 1 transid 40 /dev/nvme0n1p3 scanned by (udev-worker) (449)
Feb 13 15:43:17.868863 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (462)
Feb 13 15:43:17.879490 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:43:17.889228 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 15:43:17.937277 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:43:17.994130 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 15:43:18.006867 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 15:43:18.018978 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 15:43:18.020666 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 15:43:18.036569 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 15:43:18.045198 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 15:43:18.062446 disk-uuid[633]: Primary Header is updated.
Feb 13 15:43:18.062446 disk-uuid[633]: Secondary Entries is updated.
Feb 13 15:43:18.062446 disk-uuid[633]: Secondary Header is updated.
Feb 13 15:43:18.077802 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:43:19.095041 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 15:43:19.096655 disk-uuid[634]: The operation has completed successfully.
Feb 13 15:43:19.319737 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 15:43:19.320073 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 15:43:19.382332 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 15:43:19.389801 sh[894]: Success
Feb 13 15:43:19.417030 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 15:43:19.566865 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 15:43:19.579145 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 15:43:19.586639 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 15:43:19.638309 kernel: BTRFS info (device dm-0): first mount of filesystem 506754f7-5ef1-4c63-ad2a-b7b855a48f85
Feb 13 15:43:19.638405 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:43:19.638427 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 15:43:19.640136 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 15:43:19.640263 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 15:43:19.673043 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 15:43:19.678482 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 15:43:19.683766 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 15:43:19.697774 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 15:43:19.710024 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 15:43:19.759239 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773
Feb 13 15:43:19.759323 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:43:19.759345 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:43:19.766040 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:43:19.784167 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773
Feb 13 15:43:19.784469 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 15:43:19.795570 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 15:43:19.812251 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 15:43:19.910948 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:43:19.921466 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:43:19.965729 systemd-networkd[1087]: lo: Link UP
Feb 13 15:43:19.965743 systemd-networkd[1087]: lo: Gained carrier
Feb 13 15:43:19.968147 systemd-networkd[1087]: Enumeration completed
Feb 13 15:43:19.968285 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:43:19.969720 systemd[1]: Reached target network.target - Network.
Feb 13 15:43:19.970703 systemd-networkd[1087]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:43:19.970709 systemd-networkd[1087]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:43:19.981739 systemd-networkd[1087]: eth0: Link UP
Feb 13 15:43:19.981749 systemd-networkd[1087]: eth0: Gained carrier
Feb 13 15:43:19.981765 systemd-networkd[1087]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:43:19.998746 systemd-networkd[1087]: eth0: DHCPv4 address 172.31.30.13/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 15:43:20.149925 ignition[1011]: Ignition 2.20.0
Feb 13 15:43:20.149937 ignition[1011]: Stage: fetch-offline
Feb 13 15:43:20.150267 ignition[1011]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:43:20.153594 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:43:20.150436 ignition[1011]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:43:20.150857 ignition[1011]: Ignition finished successfully
Feb 13 15:43:20.162340 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 15:43:20.182423 ignition[1097]: Ignition 2.20.0
Feb 13 15:43:20.182436 ignition[1097]: Stage: fetch
Feb 13 15:43:20.182868 ignition[1097]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:43:20.182881 ignition[1097]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:43:20.183511 ignition[1097]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:43:20.208226 ignition[1097]: PUT result: OK
Feb 13 15:43:20.211349 ignition[1097]: parsed url from cmdline: ""
Feb 13 15:43:20.211456 ignition[1097]: no config URL provided
Feb 13 15:43:20.211521 ignition[1097]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 15:43:20.211538 ignition[1097]: no config at "/usr/lib/ignition/user.ign"
Feb 13 15:43:20.211619 ignition[1097]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:43:20.215947 ignition[1097]: PUT result: OK
Feb 13 15:43:20.216077 ignition[1097]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 15:43:20.218475 ignition[1097]: GET result: OK
Feb 13 15:43:20.218568 ignition[1097]: parsing config with SHA512: 9ce821fda1990a93cdb68424f5a00892141e37bc6e5b29b2d47c219a10cbbdc51bd5be1105021cc482b766ee868b987e87db016e759436c1d0fff48df6abecfb
Feb 13 15:43:20.228422 unknown[1097]: fetched base config from "system"
Feb 13 15:43:20.228442 unknown[1097]: fetched base config from "system"
Feb 13 15:43:20.229478 ignition[1097]: fetch: fetch complete
Feb 13 15:43:20.228452 unknown[1097]: fetched user config from "aws"
Feb 13 15:43:20.229486 ignition[1097]: fetch: fetch passed
Feb 13 15:43:20.235769 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 15:43:20.229564 ignition[1097]: Ignition finished successfully
Feb 13 15:43:20.245384 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 15:43:20.281610 ignition[1104]: Ignition 2.20.0
Feb 13 15:43:20.281621 ignition[1104]: Stage: kargs
Feb 13 15:43:20.282026 ignition[1104]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:43:20.282037 ignition[1104]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:43:20.282121 ignition[1104]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:43:20.284732 ignition[1104]: PUT result: OK
Feb 13 15:43:20.293058 ignition[1104]: kargs: kargs passed
Feb 13 15:43:20.293145 ignition[1104]: Ignition finished successfully
Feb 13 15:43:20.296733 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 15:43:20.303357 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 15:43:20.360869 ignition[1111]: Ignition 2.20.0
Feb 13 15:43:20.360884 ignition[1111]: Stage: disks
Feb 13 15:43:20.362048 ignition[1111]: no configs at "/usr/lib/ignition/base.d"
Feb 13 15:43:20.362063 ignition[1111]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:43:20.362687 ignition[1111]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:43:20.366898 ignition[1111]: PUT result: OK
Feb 13 15:43:20.371035 ignition[1111]: disks: disks passed
Feb 13 15:43:20.371100 ignition[1111]: Ignition finished successfully
Feb 13 15:43:20.374544 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 15:43:20.376709 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 15:43:20.378712 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 15:43:20.381601 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:43:20.383542 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:43:20.389062 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:43:20.396241 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 15:43:20.431567 systemd-fsck[1119]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 15:43:20.437052 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 15:43:20.642185 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 15:43:20.834022 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 8023eced-1511-4e72-a58a-db1b8cb3210e r/w with ordered data mode. Quota mode: none.
Feb 13 15:43:20.834969 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 15:43:20.838327 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:43:20.853158 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:43:20.857835 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 15:43:20.861330 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 15:43:20.864069 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 15:43:20.865179 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:43:20.875268 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 15:43:20.885143 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1138)
Feb 13 15:43:20.887527 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 15:43:20.892809 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773
Feb 13 15:43:20.892843 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:43:20.892863 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:43:20.913137 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:43:20.918578 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:43:21.178457 initrd-setup-root[1163]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 15:43:21.187412 initrd-setup-root[1170]: cut: /sysroot/etc/group: No such file or directory
Feb 13 15:43:21.195405 initrd-setup-root[1177]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 15:43:21.202391 initrd-setup-root[1184]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 15:43:21.480184 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 15:43:21.492190 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 15:43:21.498267 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 15:43:21.510036 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773
Feb 13 15:43:21.541521 ignition[1251]: INFO     : Ignition 2.20.0
Feb 13 15:43:21.541521 ignition[1251]: INFO     : Stage: mount
Feb 13 15:43:21.545368 ignition[1251]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:43:21.545368 ignition[1251]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:43:21.545368 ignition[1251]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:43:21.550075 ignition[1251]: INFO     : PUT result: OK
Feb 13 15:43:21.554946 ignition[1251]: INFO     : mount: mount passed
Feb 13 15:43:21.556997 ignition[1251]: INFO     : Ignition finished successfully
Feb 13 15:43:21.558053 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 15:43:21.569246 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 15:43:21.574940 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 15:43:21.637347 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 15:43:21.650319 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 15:43:21.707037 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1264)
Feb 13 15:43:21.712252 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 666795ea-1390-4b1f-8cde-ea877eeb5773
Feb 13 15:43:21.712644 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 15:43:21.720954 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 15:43:21.740119 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 15:43:21.748590 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 15:43:21.805360 ignition[1281]: INFO     : Ignition 2.20.0
Feb 13 15:43:21.805360 ignition[1281]: INFO     : Stage: files
Feb 13 15:43:21.809327 ignition[1281]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:43:21.809327 ignition[1281]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:43:21.809327 ignition[1281]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:43:21.816017 ignition[1281]: INFO     : PUT result: OK
Feb 13 15:43:21.819705 ignition[1281]: DEBUG    : files: compiled without relabeling support, skipping
Feb 13 15:43:21.830620 ignition[1281]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Feb 13 15:43:21.830620 ignition[1281]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 15:43:21.834867 ignition[1281]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 15:43:21.839117 ignition[1281]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Feb 13 15:43:21.841334 unknown[1281]: wrote ssh authorized keys file for user: core
Feb 13 15:43:21.843421 ignition[1281]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 15:43:21.845262 ignition[1281]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/home/core/install.sh"
Feb 13 15:43:21.845262 ignition[1281]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 15:43:21.849585 ignition[1281]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:43:21.849585 ignition[1281]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 15:43:21.849585 ignition[1281]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 15:43:21.849585 ignition[1281]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 15:43:21.849585 ignition[1281]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 15:43:21.849585 ignition[1281]: INFO     : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Feb 13 15:43:21.960185 systemd-networkd[1087]: eth0: Gained IPv6LL
Feb 13 15:43:22.302309 ignition[1281]: INFO     : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 13 15:43:22.782910 ignition[1281]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 15:43:22.785451 ignition[1281]: INFO     : files: createResultFile: createFiles: op(7): [started]  writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:43:22.785451 ignition[1281]: INFO     : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 15:43:22.785451 ignition[1281]: INFO     : files: files passed
Feb 13 15:43:22.785451 ignition[1281]: INFO     : Ignition finished successfully
Feb 13 15:43:22.793472 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 15:43:22.797224 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 15:43:22.807376 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 15:43:22.817746 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 15:43:22.817852 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 15:43:22.849431 initrd-setup-root-after-ignition[1309]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:43:22.849431 initrd-setup-root-after-ignition[1309]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:43:22.854942 initrd-setup-root-after-ignition[1313]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 15:43:22.859131 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:43:22.863453 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 15:43:22.875248 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 15:43:22.937859 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 15:43:22.937997 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 15:43:22.941942 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 15:43:22.947084 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 15:43:22.953291 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 15:43:22.964397 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 15:43:22.981461 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:43:23.002406 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 15:43:23.040013 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:43:23.049945 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:43:23.051480 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 15:43:23.053085 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 15:43:23.054412 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 15:43:23.060451 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 15:43:23.061840 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 15:43:23.066808 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 15:43:23.070022 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 15:43:23.076812 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 15:43:23.082251 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 15:43:23.085626 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 15:43:23.087209 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 15:43:23.090025 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 15:43:23.103695 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 15:43:23.109034 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 15:43:23.109313 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 15:43:23.111403 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:43:23.112815 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:43:23.115056 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 15:43:23.116541 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:43:23.119329 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 15:43:23.125783 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 15:43:23.151746 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 15:43:23.151967 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 15:43:23.162807 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 15:43:23.163032 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 15:43:23.171280 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 15:43:23.172357 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 15:43:23.172597 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:43:23.178434 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 15:43:23.180450 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 15:43:23.180719 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:43:23.183229 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 15:43:23.184164 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 15:43:23.195829 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 15:43:23.195990 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 15:43:23.212210 ignition[1333]: INFO     : Ignition 2.20.0
Feb 13 15:43:23.214230 ignition[1333]: INFO     : Stage: umount
Feb 13 15:43:23.216225 ignition[1333]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 15:43:23.216225 ignition[1333]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 15:43:23.216225 ignition[1333]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 15:43:23.221167 ignition[1333]: INFO     : PUT result: OK
Feb 13 15:43:23.224284 ignition[1333]: INFO     : umount: umount passed
Feb 13 15:43:23.225331 ignition[1333]: INFO     : Ignition finished successfully
Feb 13 15:43:23.225953 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 15:43:23.227131 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 15:43:23.227258 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 15:43:23.229467 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 15:43:23.229867 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 15:43:23.232246 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 15:43:23.232321 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 15:43:23.234313 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 15:43:23.234379 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 15:43:23.236347 systemd[1]: Stopped target network.target - Network.
Feb 13 15:43:23.238244 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 15:43:23.238337 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 15:43:23.240479 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 15:43:23.242347 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 15:43:23.248494 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:43:23.248912 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 15:43:23.249214 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 15:43:23.249426 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 15:43:23.249478 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 15:43:23.249574 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 15:43:23.249613 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 15:43:23.249731 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 15:43:23.249786 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 15:43:23.249914 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 15:43:23.249953 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 15:43:23.250419 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 15:43:23.250780 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 15:43:23.264422 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 15:43:23.264580 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 15:43:23.272611 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Feb 13 15:43:23.272993 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 15:43:23.273319 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 15:43:23.278524 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Feb 13 15:43:23.279928 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 15:43:23.280408 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:43:23.288033 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 15:43:23.290679 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 15:43:23.290771 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 15:43:23.294992 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 15:43:23.295092 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:43:23.297951 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 15:43:23.298051 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:43:23.300737 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 15:43:23.300788 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:43:23.312676 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:43:23.321519 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 13 15:43:23.321634 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Feb 13 15:43:23.337131 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 15:43:23.337889 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:43:23.344620 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 15:43:23.344707 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:43:23.348147 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 15:43:23.348212 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:43:23.350539 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 15:43:23.350636 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 15:43:23.356780 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 15:43:23.356849 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 15:43:23.362888 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 15:43:23.362959 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 15:43:23.382305 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 15:43:23.383543 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 15:43:23.383652 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:43:23.385081 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 15:43:23.385234 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:43:23.393999 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 13 15:43:23.394094 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Feb 13 15:43:23.394633 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 15:43:23.396772 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 15:43:23.400929 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 15:43:23.401164 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 15:43:23.416529 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 15:43:23.416679 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 15:43:23.429779 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 15:43:23.429916 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 15:43:23.435968 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 15:43:23.442518 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 15:43:23.464676 systemd[1]: Switching root.
Feb 13 15:43:23.502285 systemd-journald[179]: Journal stopped
Feb 13 15:43:25.725231 systemd-journald[179]: Received SIGTERM from PID 1 (systemd).
Feb 13 15:43:25.725323 kernel: SELinux:  policy capability network_peer_controls=1
Feb 13 15:43:25.725347 kernel: SELinux:  policy capability open_perms=1
Feb 13 15:43:25.725366 kernel: SELinux:  policy capability extended_socket_class=1
Feb 13 15:43:25.725386 kernel: SELinux:  policy capability always_check_network=0
Feb 13 15:43:25.725406 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 13 15:43:25.725431 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 13 15:43:25.732196 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Feb 13 15:43:25.732254 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Feb 13 15:43:25.732275 kernel: audit: type=1403 audit(1739461403.802:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 15:43:25.732297 systemd[1]: Successfully loaded SELinux policy in 61.147ms.
Feb 13 15:43:25.732329 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 20.538ms.
Feb 13 15:43:25.732740 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 15:43:25.732787 systemd[1]: Detected virtualization amazon.
Feb 13 15:43:25.732807 systemd[1]: Detected architecture x86-64.
Feb 13 15:43:25.732826 systemd[1]: Detected first boot.
Feb 13 15:43:25.732849 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 15:43:25.732871 zram_generator::config[1378]: No configuration found.
Feb 13 15:43:25.732895 kernel: Guest personality initialized and is inactive
Feb 13 15:43:25.732922 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Feb 13 15:43:25.732940 kernel: Initialized host personality
Feb 13 15:43:25.732960 kernel: NET: Registered PF_VSOCK protocol family
Feb 13 15:43:25.733726 systemd[1]: Populated /etc with preset unit settings.
Feb 13 15:43:25.733767 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Feb 13 15:43:25.733787 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 15:43:25.733807 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 15:43:25.733836 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:43:25.733856 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 15:43:25.733877 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 15:43:25.733898 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 15:43:25.733924 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 15:43:25.735500 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 15:43:25.735529 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 15:43:25.735556 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 15:43:25.735574 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 15:43:25.735593 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 15:43:25.735613 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 15:43:25.735632 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 15:43:25.735656 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 15:43:25.735675 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 15:43:25.735694 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 15:43:25.735714 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 15:43:25.735733 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 15:43:25.735751 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 15:43:25.735769 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 15:43:25.735787 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 15:43:25.735810 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 15:43:25.735828 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 15:43:25.735847 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 15:43:25.735865 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 15:43:25.735883 systemd[1]: Reached target swap.target - Swaps.
Feb 13 15:43:25.735901 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 15:43:25.735919 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 15:43:25.735938 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Feb 13 15:43:25.735956 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 15:43:25.735978 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 15:43:25.735997 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 15:43:25.736028 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 15:43:25.736049 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 15:43:25.736068 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 15:43:25.736087 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 15:43:25.736105 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:43:25.736124 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 15:43:25.736143 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 15:43:25.736166 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 15:43:25.736185 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 15:43:25.736203 systemd[1]: Reached target machines.target - Containers.
Feb 13 15:43:25.736222 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 15:43:25.736243 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:43:25.736262 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 15:43:25.736282 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 15:43:25.736303 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:43:25.736328 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:43:25.736347 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:43:25.736368 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 15:43:25.736389 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:43:25.736411 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 15:43:25.736433 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 15:43:25.736456 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 15:43:25.736476 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 15:43:25.736497 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 15:43:25.736524 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 15:43:25.736546 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 15:43:25.736566 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 15:43:25.736584 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 15:43:25.736605 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 15:43:25.736627 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Feb 13 15:43:25.736649 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 15:43:25.738069 kernel: fuse: init (API version 7.39)
Feb 13 15:43:25.738111 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 15:43:25.738140 systemd[1]: Stopped verity-setup.service.
Feb 13 15:43:25.738160 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:43:25.738189 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 15:43:25.738206 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 15:43:25.738229 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 15:43:25.738247 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 15:43:25.738266 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 15:43:25.738287 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 15:43:25.738306 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 15:43:25.738327 kernel: loop: module loaded
Feb 13 15:43:25.738344 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 15:43:25.738363 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 15:43:25.738381 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:43:25.738400 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:43:25.738418 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:43:25.738437 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:43:25.738504 systemd-journald[1458]: Collecting audit messages is disabled.
Feb 13 15:43:25.738544 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 15:43:25.738563 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 15:43:25.738581 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:43:25.738600 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:43:25.738619 systemd-journald[1458]: Journal started
Feb 13 15:43:25.738655 systemd-journald[1458]: Runtime Journal (/run/log/journal/ec2b7d8b4608231e04c4b32fb1bfa905) is 4.8M, max 38.5M, 33.7M free.
Feb 13 15:43:25.142704 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 15:43:25.787615 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 15:43:25.151713 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Feb 13 15:43:25.152702 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 15:43:25.748733 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 15:43:25.751060 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 15:43:25.760232 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 15:43:25.787733 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 15:43:25.792173 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 15:43:25.795276 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:43:25.804028 kernel: ACPI: bus type drm_connector registered
Feb 13 15:43:25.815690 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 15:43:25.823286 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:43:25.824250 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:43:25.831120 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 15:43:25.839797 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 15:43:25.851371 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 15:43:25.867375 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 15:43:25.887336 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 15:43:25.887392 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 15:43:25.890843 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Feb 13 15:43:25.908075 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 15:43:25.917291 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 15:43:25.918774 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:43:25.923407 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 15:43:25.927281 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 15:43:25.928537 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:43:25.945431 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 15:43:25.953256 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 15:43:25.961753 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 15:43:25.966094 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Feb 13 15:43:25.974509 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 15:43:25.979130 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 15:43:26.008045 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 15:43:26.019974 systemd-journald[1458]: Time spent on flushing to /var/log/journal/ec2b7d8b4608231e04c4b32fb1bfa905 is 88.329ms for 960 entries.
Feb 13 15:43:26.019974 systemd-journald[1458]: System Journal (/var/log/journal/ec2b7d8b4608231e04c4b32fb1bfa905) is 8M, max 195.6M, 187.6M free.
Feb 13 15:43:26.125801 systemd-journald[1458]: Received client request to flush runtime journal.
Feb 13 15:43:26.125861 kernel: loop0: detected capacity change from 0 to 147912
Feb 13 15:43:26.030341 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 15:43:26.041784 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 15:43:26.052250 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 15:43:26.067255 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Feb 13 15:43:26.102965 udevadm[1519]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 15:43:26.132852 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 15:43:26.160885 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 15:43:26.164981 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Feb 13 15:43:26.186446 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 15:43:26.198076 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 15:43:26.268419 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 15:43:26.269864 systemd-tmpfiles[1527]: ACLs are not supported, ignoring.
Feb 13 15:43:26.269893 systemd-tmpfiles[1527]: ACLs are not supported, ignoring.
Feb 13 15:43:26.281871 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 15:43:26.292041 kernel: loop1: detected capacity change from 0 to 205544
Feb 13 15:43:26.479791 kernel: loop2: detected capacity change from 0 to 138176
Feb 13 15:43:26.691595 kernel: loop3: detected capacity change from 0 to 62832
Feb 13 15:43:26.859132 kernel: loop4: detected capacity change from 0 to 147912
Feb 13 15:43:26.938046 kernel: loop5: detected capacity change from 0 to 205544
Feb 13 15:43:26.987448 kernel: loop6: detected capacity change from 0 to 138176
Feb 13 15:43:27.027106 kernel: loop7: detected capacity change from 0 to 62832
Feb 13 15:43:27.048900 (sd-merge)[1535]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Feb 13 15:43:27.050674 (sd-merge)[1535]: Merged extensions into '/usr'.
Feb 13 15:43:27.058806 systemd[1]: Reload requested from client PID 1513 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 15:43:27.059995 systemd[1]: Reloading...
Feb 13 15:43:27.259129 ldconfig[1509]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 15:43:27.262106 zram_generator::config[1566]: No configuration found.
Feb 13 15:43:27.484467 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:43:27.639127 systemd[1]: Reloading finished in 578 ms.
Feb 13 15:43:27.665563 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 15:43:27.668453 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 15:43:27.670726 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 15:43:27.687914 systemd[1]: Starting ensure-sysext.service...
Feb 13 15:43:27.692226 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 15:43:27.698251 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 15:43:27.718892 systemd[1]: Reload requested from client PID 1613 ('systemctl') (unit ensure-sysext.service)...
Feb 13 15:43:27.718912 systemd[1]: Reloading...
Feb 13 15:43:27.778771 systemd-udevd[1615]: Using default interface naming scheme 'v255'.
Feb 13 15:43:27.780349 systemd-tmpfiles[1614]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 15:43:27.780942 systemd-tmpfiles[1614]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 15:43:27.784444 systemd-tmpfiles[1614]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 15:43:27.786377 systemd-tmpfiles[1614]: ACLs are not supported, ignoring.
Feb 13 15:43:27.786479 systemd-tmpfiles[1614]: ACLs are not supported, ignoring.
Feb 13 15:43:27.798731 systemd-tmpfiles[1614]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:43:27.798750 systemd-tmpfiles[1614]: Skipping /boot
Feb 13 15:43:27.833965 systemd-tmpfiles[1614]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 15:43:27.833985 systemd-tmpfiles[1614]: Skipping /boot
Feb 13 15:43:27.910038 zram_generator::config[1642]: No configuration found.
Feb 13 15:43:28.111193 (udev-worker)[1650]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:43:28.178040 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Feb 13 15:43:28.197455 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Feb 13 15:43:28.197522 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1646)
Feb 13 15:43:28.197552 kernel: ACPI: button: Power Button [PWRF]
Feb 13 15:43:28.197579 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
Feb 13 15:43:28.206054 kernel: ACPI: button: Sleep Button [SLPF]
Feb 13 15:43:28.223083 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
Feb 13 15:43:28.320647 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:43:28.387334 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 15:43:28.522155 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 15:43:28.522531 systemd[1]: Reloading finished in 802 ms.
Feb 13 15:43:28.542096 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 15:43:28.560832 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 15:43:28.598050 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 15:43:28.610594 systemd[1]: Finished ensure-sysext.service.
Feb 13 15:43:28.648949 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 15:43:28.655411 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:43:28.661272 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:43:28.670461 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 15:43:28.672600 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 15:43:28.686288 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 15:43:28.697288 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 15:43:28.712136 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 15:43:28.734423 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 15:43:28.742680 lvm[1804]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:43:28.746777 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 15:43:28.748292 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 15:43:28.756273 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 15:43:28.757721 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 15:43:28.776624 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 15:43:28.802173 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 15:43:28.816500 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 15:43:28.818334 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 15:43:28.822670 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 15:43:28.834323 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 15:43:28.837712 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 15:43:28.841135 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 15:43:28.848064 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 15:43:28.849141 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 15:43:28.852500 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 15:43:28.854936 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 15:43:28.856449 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 15:43:28.858347 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 15:43:28.859161 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 15:43:28.859469 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 15:43:28.872587 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 15:43:28.888095 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 15:43:28.898323 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 15:43:28.899721 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 15:43:28.899874 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 15:43:28.908123 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 15:43:28.927417 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 15:43:28.952172 lvm[1830]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 15:43:28.985159 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 15:43:28.995747 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 15:43:29.056803 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 15:43:29.064847 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 15:43:29.066684 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 15:43:29.071770 augenrules[1850]: No rules
Feb 13 15:43:29.074272 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 15:43:29.078212 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:43:29.078610 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:43:29.112490 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 15:43:29.311569 systemd-resolved[1818]: Positive Trust Anchors:
Feb 13 15:43:29.311590 systemd-resolved[1818]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 15:43:29.311723 systemd-resolved[1818]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 15:43:29.314156 systemd-networkd[1817]: lo: Link UP
Feb 13 15:43:29.314168 systemd-networkd[1817]: lo: Gained carrier
Feb 13 15:43:29.316526 systemd-networkd[1817]: Enumeration completed
Feb 13 15:43:29.316666 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 15:43:29.316955 systemd-networkd[1817]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:43:29.316961 systemd-networkd[1817]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 15:43:29.320880 systemd-networkd[1817]: eth0: Link UP
Feb 13 15:43:29.321075 systemd-networkd[1817]: eth0: Gained carrier
Feb 13 15:43:29.321111 systemd-networkd[1817]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 15:43:29.324349 systemd-resolved[1818]: Defaulting to hostname 'linux'.
Feb 13 15:43:29.332154 systemd-networkd[1817]: eth0: DHCPv4 address 172.31.30.13/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 15:43:29.356707 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 15:43:29.358489 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 15:43:29.362192 systemd[1]: Reached target network.target - Network.
Feb 13 15:43:29.363670 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 15:43:29.365139 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 15:43:29.366438 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 15:43:29.369074 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 15:43:29.370793 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 15:43:29.372133 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 15:43:29.373560 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 15:43:29.374855 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 15:43:29.374890 systemd[1]: Reached target paths.target - Path Units.
Feb 13 15:43:29.375786 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 15:43:29.378045 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 15:43:29.381276 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 15:43:29.385947 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Feb 13 15:43:29.387492 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Feb 13 15:43:29.388866 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Feb 13 15:43:29.401165 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 15:43:29.403175 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Feb 13 15:43:29.413215 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Feb 13 15:43:29.415958 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 15:43:29.417977 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 15:43:29.419294 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 15:43:29.423101 systemd[1]: Reached target basic.target - Basic System.
Feb 13 15:43:29.424259 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:43:29.424286 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 15:43:29.430079 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 15:43:29.435281 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Feb 13 15:43:29.454272 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 15:43:29.460149 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 15:43:29.464214 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 15:43:29.466087 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 15:43:29.470423 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 15:43:29.495562 systemd[1]: Started ntpd.service - Network Time Service.
Feb 13 15:43:29.511167 systemd[1]: Starting setup-oem.service - Setup OEM...
Feb 13 15:43:29.515535 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 15:43:29.526175 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 15:43:29.541244 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 15:43:29.545446 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 15:43:29.547867 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 15:43:29.549157 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 15:43:29.554517 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 15:43:29.561225 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Feb 13 15:43:29.665151 jq[1875]: false
Feb 13 15:43:29.700267 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 15:43:29.701518 extend-filesystems[1876]: Found loop4
Feb 13 15:43:29.701518 extend-filesystems[1876]: Found loop5
Feb 13 15:43:29.701518 extend-filesystems[1876]: Found loop6
Feb 13 15:43:29.701518 extend-filesystems[1876]: Found loop7
Feb 13 15:43:29.701518 extend-filesystems[1876]: Found nvme0n1
Feb 13 15:43:29.701518 extend-filesystems[1876]: Found nvme0n1p1
Feb 13 15:43:29.701518 extend-filesystems[1876]: Found nvme0n1p2
Feb 13 15:43:29.701518 extend-filesystems[1876]: Found nvme0n1p3
Feb 13 15:43:29.701518 extend-filesystems[1876]: Found usr
Feb 13 15:43:29.701518 extend-filesystems[1876]: Found nvme0n1p4
Feb 13 15:43:29.701518 extend-filesystems[1876]: Found nvme0n1p6
Feb 13 15:43:29.701518 extend-filesystems[1876]: Found nvme0n1p7
Feb 13 15:43:29.701518 extend-filesystems[1876]: Found nvme0n1p9
Feb 13 15:43:29.701518 extend-filesystems[1876]: Checking size of /dev/nvme0n1p9
Feb 13 15:43:29.793639 jq[1890]: true
Feb 13 15:43:29.700594 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 15:43:29.800376 update_engine[1888]: I20250213 15:43:29.775449  1888 main.cc:92] Flatcar Update Engine starting
Feb 13 15:43:29.749444 dbus-daemon[1874]: [system] SELinux support is enabled
Feb 13 15:43:29.703204 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 15:43:29.781639 dbus-daemon[1874]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1817 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Feb 13 15:43:29.703506 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 15:43:29.823665 extend-filesystems[1876]: Resized partition /dev/nvme0n1p9
Feb 13 15:43:29.843178 update_engine[1888]: I20250213 15:43:29.820304  1888 update_check_scheduler.cc:74] Next update check in 8m42s
Feb 13 15:43:29.708679 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 15:43:29.715155 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 15:43:29.843562 jq[1906]: true
Feb 13 15:43:29.739623 (ntainerd)[1896]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 15:43:29.868270 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Feb 13 15:43:29.754091 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 15:43:29.868897 extend-filesystems[1917]: resize2fs 1.47.1 (20-May-2024)
Feb 13 15:43:29.772417 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 15:43:29.772488 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 15:43:29.775726 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 15:43:29.775758 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 15:43:29.797295 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Feb 13 15:43:29.824620 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 15:43:29.842255 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 15:43:29.920512 systemd[1]: Finished setup-oem.service - Setup OEM.
Feb 13 15:43:29.951045 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Feb 13 15:43:29.973028 extend-filesystems[1917]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Feb 13 15:43:29.973028 extend-filesystems[1917]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 13 15:43:29.973028 extend-filesystems[1917]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Feb 13 15:43:29.981850 extend-filesystems[1876]: Resized filesystem in /dev/nvme0n1p9
Feb 13 15:43:29.985938 ntpd[1878]: 13 Feb 15:43:29 ntpd[1878]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 13:23:52 UTC 2025 (1): Starting
Feb 13 15:43:29.985938 ntpd[1878]: 13 Feb 15:43:29 ntpd[1878]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Feb 13 15:43:29.985938 ntpd[1878]: 13 Feb 15:43:29 ntpd[1878]: ----------------------------------------------------
Feb 13 15:43:29.985938 ntpd[1878]: 13 Feb 15:43:29 ntpd[1878]: ntp-4 is maintained by Network Time Foundation,
Feb 13 15:43:29.985938 ntpd[1878]: 13 Feb 15:43:29 ntpd[1878]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Feb 13 15:43:29.985938 ntpd[1878]: 13 Feb 15:43:29 ntpd[1878]: corporation.  Support and training for ntp-4 are
Feb 13 15:43:29.985938 ntpd[1878]: 13 Feb 15:43:29 ntpd[1878]: available at https://www.nwtime.org/support
Feb 13 15:43:29.985938 ntpd[1878]: 13 Feb 15:43:29 ntpd[1878]: ----------------------------------------------------
Feb 13 15:43:29.980650 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 15:43:29.980950 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 15:43:29.995305 ntpd[1878]: proto: precision = 0.064 usec (-24)
Feb 13 15:43:29.998518 ntpd[1878]: 13 Feb 15:43:29 ntpd[1878]: basedate set to 2025-02-01
Feb 13 15:43:29.998518 ntpd[1878]: 13 Feb 15:43:29 ntpd[1878]: gps base set to 2025-02-02 (week 2352)
Feb 13 15:43:30.002526 coreos-metadata[1873]: Feb 13 15:43:30.002 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Feb 13 15:43:30.024319 coreos-metadata[1873]: Feb 13 15:43:30.019 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Feb 13 15:43:30.024447 ntpd[1878]: 13 Feb 15:43:30 ntpd[1878]: Listen and drop on 0 v6wildcard [::]:123
Feb 13 15:43:30.024447 ntpd[1878]: 13 Feb 15:43:30 ntpd[1878]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Feb 13 15:43:30.024447 ntpd[1878]: 13 Feb 15:43:30 ntpd[1878]: Listen normally on 2 lo 127.0.0.1:123
Feb 13 15:43:30.024447 ntpd[1878]: 13 Feb 15:43:30 ntpd[1878]: Listen normally on 3 eth0 172.31.30.13:123
Feb 13 15:43:30.024447 ntpd[1878]: 13 Feb 15:43:30 ntpd[1878]: Listen normally on 4 lo [::1]:123
Feb 13 15:43:30.007999 ntpd[1878]: bind(21) AF_INET6 fe80::42f:81ff:febd:81af%2#123 flags 0x11 failed: Cannot assign requested address
Feb 13 15:43:30.036540 ntpd[1878]: 13 Feb 15:43:30 ntpd[1878]: unable to create socket on eth0 (5) for fe80::42f:81ff:febd:81af%2#123
Feb 13 15:43:30.036540 ntpd[1878]: 13 Feb 15:43:30 ntpd[1878]: failed to init interface for address fe80::42f:81ff:febd:81af%2
Feb 13 15:43:30.036540 ntpd[1878]: 13 Feb 15:43:30 ntpd[1878]: Listening on routing socket on fd #21 for interface updates
Feb 13 15:43:30.036923 coreos-metadata[1873]: Feb 13 15:43:30.026 INFO Fetch successful
Feb 13 15:43:30.036923 coreos-metadata[1873]: Feb 13 15:43:30.027 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Feb 13 15:43:30.043942 coreos-metadata[1873]: Feb 13 15:43:30.038 INFO Fetch successful
Feb 13 15:43:30.043942 coreos-metadata[1873]: Feb 13 15:43:30.038 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Feb 13 15:43:30.043942 coreos-metadata[1873]: Feb 13 15:43:30.043 INFO Fetch successful
Feb 13 15:43:30.043942 coreos-metadata[1873]: Feb 13 15:43:30.043 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Feb 13 15:43:30.044209 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (1644)
Feb 13 15:43:30.055577 coreos-metadata[1873]: Feb 13 15:43:30.050 INFO Fetch successful
Feb 13 15:43:30.055577 coreos-metadata[1873]: Feb 13 15:43:30.050 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Feb 13 15:43:30.055577 coreos-metadata[1873]: Feb 13 15:43:30.053 INFO Fetch failed with 404: resource not found
Feb 13 15:43:30.055577 coreos-metadata[1873]: Feb 13 15:43:30.053 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Feb 13 15:43:30.058036 coreos-metadata[1873]: Feb 13 15:43:30.057 INFO Fetch successful
Feb 13 15:43:30.058036 coreos-metadata[1873]: Feb 13 15:43:30.057 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Feb 13 15:43:30.058389 ntpd[1878]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 15:43:30.058433 ntpd[1878]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 15:43:30.063974 coreos-metadata[1873]: Feb 13 15:43:30.061 INFO Fetch successful
Feb 13 15:43:30.063974 coreos-metadata[1873]: Feb 13 15:43:30.061 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Feb 13 15:43:30.063974 coreos-metadata[1873]: Feb 13 15:43:30.062 INFO Fetch successful
Feb 13 15:43:30.063974 coreos-metadata[1873]: Feb 13 15:43:30.062 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Feb 13 15:43:30.097635 coreos-metadata[1873]: Feb 13 15:43:30.097 INFO Fetch successful
Feb 13 15:43:30.097635 coreos-metadata[1873]: Feb 13 15:43:30.097 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Feb 13 15:43:30.129242 coreos-metadata[1873]: Feb 13 15:43:30.123 INFO Fetch successful
Feb 13 15:43:30.239751 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Feb 13 15:43:30.243116 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Feb 13 15:43:30.280738 bash[1959]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 15:43:30.283315 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 15:43:30.295336 systemd[1]: Starting sshkeys.service...
Feb 13 15:43:30.409586 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Feb 13 15:43:30.411741 systemd-logind[1885]: Watching system buttons on /dev/input/event1 (Power Button)
Feb 13 15:43:30.414094 systemd-logind[1885]: Watching system buttons on /dev/input/event2 (Sleep Button)
Feb 13 15:43:30.414144 systemd-logind[1885]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 13 15:43:30.415823 systemd-logind[1885]: New seat seat0.
Feb 13 15:43:30.419423 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Feb 13 15:43:30.422401 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 15:43:30.509151 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Feb 13 15:43:30.510378 dbus-daemon[1874]: [system] Successfully activated service 'org.freedesktop.hostname1'
Feb 13 15:43:30.517324 dbus-daemon[1874]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1912 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Feb 13 15:43:30.525130 sshd_keygen[1916]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 15:43:30.531430 systemd[1]: Starting polkit.service - Authorization Manager...
Feb 13 15:43:30.578382 locksmithd[1920]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 15:43:30.648068 polkitd[2012]: Started polkitd version 121
Feb 13 15:43:30.688780 polkitd[2012]: Loading rules from directory /etc/polkit-1/rules.d
Feb 13 15:43:30.689196 polkitd[2012]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 13 15:43:30.710357 polkitd[2012]: Finished loading, compiling and executing 2 rules
Feb 13 15:43:30.721598 dbus-daemon[1874]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Feb 13 15:43:30.721821 systemd[1]: Started polkit.service - Authorization Manager.
Feb 13 15:43:30.728939 polkitd[2012]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Feb 13 15:43:30.742020 coreos-metadata[1992]: Feb 13 15:43:30.731 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Feb 13 15:43:30.755171 coreos-metadata[1992]: Feb 13 15:43:30.746 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Feb 13 15:43:30.755171 coreos-metadata[1992]: Feb 13 15:43:30.749 INFO Fetch successful
Feb 13 15:43:30.755171 coreos-metadata[1992]: Feb 13 15:43:30.749 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Feb 13 15:43:30.753883 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 15:43:30.756663 coreos-metadata[1992]: Feb 13 15:43:30.756 INFO Fetch successful
Feb 13 15:43:30.763306 unknown[1992]: wrote ssh authorized keys file for user: core
Feb 13 15:43:30.776165 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 15:43:30.828078 update-ssh-keys[2060]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 15:43:30.832326 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Feb 13 15:43:30.846756 systemd[1]: Finished sshkeys.service.
Feb 13 15:43:30.876822 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 15:43:30.877611 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 15:43:30.893499 systemd-resolved[1818]: System hostname changed to 'ip-172-31-30-13'.
Feb 13 15:43:30.893506 systemd-hostnamed[1912]: Hostname set to <ip-172-31-30-13> (transient)
Feb 13 15:43:30.898731 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 15:43:30.921223 systemd-networkd[1817]: eth0: Gained IPv6LL
Feb 13 15:43:30.951984 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 15:43:30.972684 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 15:43:30.981532 systemd[1]: Reached target network-online.target - Network is Online.
Feb 13 15:43:30.995510 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Feb 13 15:43:31.010880 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 15:43:31.026305 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:43:31.037649 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 15:43:31.044581 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Feb 13 15:43:31.046391 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 15:43:31.087690 containerd[1896]: time="2025-02-13T15:43:31.086481162Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 15:43:31.125742 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 15:43:31.142987 systemd[1]: Started sshd@0-172.31.30.13:22-139.178.68.195:47226.service - OpenSSH per-connection server daemon (139.178.68.195:47226).
Feb 13 15:43:31.153118 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 15:43:31.240362 amazon-ssm-agent[2087]: Initializing new seelog logger
Feb 13 15:43:31.241943 amazon-ssm-agent[2087]: New Seelog Logger Creation Complete
Feb 13 15:43:31.241943 amazon-ssm-agent[2087]: 2025/02/13 15:43:31 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:43:31.241943 amazon-ssm-agent[2087]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:43:31.243030 amazon-ssm-agent[2087]: 2025/02/13 15:43:31 processing appconfig overrides
Feb 13 15:43:31.243030 amazon-ssm-agent[2087]: 2025/02/13 15:43:31 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:43:31.243030 amazon-ssm-agent[2087]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:43:31.243030 amazon-ssm-agent[2087]: 2025/02/13 15:43:31 processing appconfig overrides
Feb 13 15:43:31.243431 amazon-ssm-agent[2087]: 2025/02/13 15:43:31 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:43:31.243485 amazon-ssm-agent[2087]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:43:31.243914 amazon-ssm-agent[2087]: 2025/02/13 15:43:31 processing appconfig overrides
Feb 13 15:43:31.244468 amazon-ssm-agent[2087]: 2025-02-13 15:43:31 INFO Proxy environment variables:
Feb 13 15:43:31.248405 amazon-ssm-agent[2087]: 2025/02/13 15:43:31 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:43:31.248515 amazon-ssm-agent[2087]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 15:43:31.248733 amazon-ssm-agent[2087]: 2025/02/13 15:43:31 processing appconfig overrides
Feb 13 15:43:31.276192 containerd[1896]: time="2025-02-13T15:43:31.276109356Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:43:31.284974 containerd[1896]: time="2025-02-13T15:43:31.284553217Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:43:31.284974 containerd[1896]: time="2025-02-13T15:43:31.284612670Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 15:43:31.284974 containerd[1896]: time="2025-02-13T15:43:31.284650330Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 15:43:31.288592 containerd[1896]: time="2025-02-13T15:43:31.286643879Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 15:43:31.288592 containerd[1896]: time="2025-02-13T15:43:31.286694773Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 15:43:31.288592 containerd[1896]: time="2025-02-13T15:43:31.286811366Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:43:31.288592 containerd[1896]: time="2025-02-13T15:43:31.286958236Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:43:31.288592 containerd[1896]: time="2025-02-13T15:43:31.287365309Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:43:31.288592 containerd[1896]: time="2025-02-13T15:43:31.287391703Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 15:43:31.288592 containerd[1896]: time="2025-02-13T15:43:31.287414308Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:43:31.288592 containerd[1896]: time="2025-02-13T15:43:31.287429456Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 15:43:31.288592 containerd[1896]: time="2025-02-13T15:43:31.287535668Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:43:31.288592 containerd[1896]: time="2025-02-13T15:43:31.287882849Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 15:43:31.288592 containerd[1896]: time="2025-02-13T15:43:31.288251192Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 15:43:31.291421 containerd[1896]: time="2025-02-13T15:43:31.288275708Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 15:43:31.291421 containerd[1896]: time="2025-02-13T15:43:31.288477847Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 15:43:31.291421 containerd[1896]: time="2025-02-13T15:43:31.288546464Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 15:43:31.301110 containerd[1896]: time="2025-02-13T15:43:31.301057391Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 15:43:31.301482 containerd[1896]: time="2025-02-13T15:43:31.301144026Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 15:43:31.301482 containerd[1896]: time="2025-02-13T15:43:31.301168041Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 15:43:31.301482 containerd[1896]: time="2025-02-13T15:43:31.301188550Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 15:43:31.301482 containerd[1896]: time="2025-02-13T15:43:31.301210529Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 15:43:31.301686 containerd[1896]: time="2025-02-13T15:43:31.301658995Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 15:43:31.302062 containerd[1896]: time="2025-02-13T15:43:31.302030340Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 15:43:31.302244 containerd[1896]: time="2025-02-13T15:43:31.302220513Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 15:43:31.302298 containerd[1896]: time="2025-02-13T15:43:31.302253286Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 15:43:31.302298 containerd[1896]: time="2025-02-13T15:43:31.302276297Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 15:43:31.302367 containerd[1896]: time="2025-02-13T15:43:31.302297909Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 15:43:31.302367 containerd[1896]: time="2025-02-13T15:43:31.302321253Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 15:43:31.302367 containerd[1896]: time="2025-02-13T15:43:31.302341523Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 15:43:31.302568 containerd[1896]: time="2025-02-13T15:43:31.302459148Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 15:43:31.302568 containerd[1896]: time="2025-02-13T15:43:31.302487948Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 15:43:31.302568 containerd[1896]: time="2025-02-13T15:43:31.302507903Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 15:43:31.302568 containerd[1896]: time="2025-02-13T15:43:31.302526104Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 15:43:31.302568 containerd[1896]: time="2025-02-13T15:43:31.302547623Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 15:43:31.302775 containerd[1896]: time="2025-02-13T15:43:31.302578571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 15:43:31.302775 containerd[1896]: time="2025-02-13T15:43:31.302600895Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 15:43:31.302775 containerd[1896]: time="2025-02-13T15:43:31.302620936Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 15:43:31.302775 containerd[1896]: time="2025-02-13T15:43:31.302650534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 15:43:31.302775 containerd[1896]: time="2025-02-13T15:43:31.302670443Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 15:43:31.302775 containerd[1896]: time="2025-02-13T15:43:31.302690834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 15:43:31.302775 containerd[1896]: time="2025-02-13T15:43:31.302708453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 15:43:31.303142 containerd[1896]: time="2025-02-13T15:43:31.302777486Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 15:43:31.303142 containerd[1896]: time="2025-02-13T15:43:31.302801457Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 15:43:31.303142 containerd[1896]: time="2025-02-13T15:43:31.302823889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 15:43:31.303142 containerd[1896]: time="2025-02-13T15:43:31.302844178Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 15:43:31.303142 containerd[1896]: time="2025-02-13T15:43:31.302914793Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 15:43:31.303142 containerd[1896]: time="2025-02-13T15:43:31.302936852Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 15:43:31.303142 containerd[1896]: time="2025-02-13T15:43:31.302958873Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 15:43:31.303142 containerd[1896]: time="2025-02-13T15:43:31.302990970Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 15:43:31.303763 containerd[1896]: time="2025-02-13T15:43:31.303741475Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 15:43:31.303805 containerd[1896]: time="2025-02-13T15:43:31.303769538Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 15:43:31.305104 containerd[1896]: time="2025-02-13T15:43:31.303888400Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 15:43:31.305104 containerd[1896]: time="2025-02-13T15:43:31.303919571Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 15:43:31.305104 containerd[1896]: time="2025-02-13T15:43:31.303938110Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 15:43:31.305104 containerd[1896]: time="2025-02-13T15:43:31.303976964Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 15:43:31.305104 containerd[1896]: time="2025-02-13T15:43:31.303994201Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 15:43:31.305104 containerd[1896]: time="2025-02-13T15:43:31.304051008Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 15:43:31.305104 containerd[1896]: time="2025-02-13T15:43:31.304070376Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 15:43:31.305104 containerd[1896]: time="2025-02-13T15:43:31.304123422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 15:43:31.308974 containerd[1896]: time="2025-02-13T15:43:31.306588656Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: 
TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 15:43:31.308974 containerd[1896]: time="2025-02-13T15:43:31.306672040Z" level=info msg="Connect containerd service"
Feb 13 15:43:31.308974 containerd[1896]: time="2025-02-13T15:43:31.306729189Z" level=info msg="using legacy CRI server"
Feb 13 15:43:31.308974 containerd[1896]: time="2025-02-13T15:43:31.306741359Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 15:43:31.308974 containerd[1896]: time="2025-02-13T15:43:31.308191869Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 15:43:31.311222 containerd[1896]: time="2025-02-13T15:43:31.309745422Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 15:43:31.311222 containerd[1896]: time="2025-02-13T15:43:31.310155181Z" level=info msg="Start subscribing containerd event"
Feb 13 15:43:31.311222 containerd[1896]: time="2025-02-13T15:43:31.310226794Z" level=info msg="Start recovering state"
Feb 13 15:43:31.311222 containerd[1896]: time="2025-02-13T15:43:31.310316002Z" level=info msg="Start event monitor"
Feb 13 15:43:31.311222 containerd[1896]: time="2025-02-13T15:43:31.310331471Z" level=info msg="Start snapshots syncer"
Feb 13 15:43:31.311222 containerd[1896]: time="2025-02-13T15:43:31.310345766Z" level=info msg="Start cni network conf syncer for default"
Feb 13 15:43:31.311222 containerd[1896]: time="2025-02-13T15:43:31.310356811Z" level=info msg="Start streaming server"
Feb 13 15:43:31.313262 containerd[1896]: time="2025-02-13T15:43:31.311281147Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 15:43:31.313262 containerd[1896]: time="2025-02-13T15:43:31.311343167Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 15:43:31.313262 containerd[1896]: time="2025-02-13T15:43:31.312350338Z" level=info msg="containerd successfully booted in 0.227280s"
Feb 13 15:43:31.311663 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 15:43:31.346949 amazon-ssm-agent[2087]: 2025-02-13 15:43:31 INFO https_proxy:
Feb 13 15:43:31.445341 amazon-ssm-agent[2087]: 2025-02-13 15:43:31 INFO http_proxy:
Feb 13 15:43:31.476387 sshd[2101]: Accepted publickey for core from 139.178.68.195 port 47226 ssh2: RSA SHA256:yvubg5TE4tQn1Ceu414+Zp2Lz0TbCrCQ13qkOHSJoSg
Feb 13 15:43:31.481648 sshd-session[2101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:43:31.495810 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 13 15:43:31.507891 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 13 15:43:31.532473 systemd-logind[1885]: New session 1 of user core.
Feb 13 15:43:31.546186 amazon-ssm-agent[2087]: 2025-02-13 15:43:31 INFO no_proxy:
Feb 13 15:43:31.555892 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 13 15:43:31.581202 systemd[1]: Starting user@500.service - User Manager for UID 500...
Feb 13 15:43:31.595974 (systemd)[2115]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 13 15:43:31.600743 systemd-logind[1885]: New session c1 of user core.
Feb 13 15:43:31.644033 amazon-ssm-agent[2087]: 2025-02-13 15:43:31 INFO Checking if agent identity type OnPrem can be assumed
Feb 13 15:43:31.742361 amazon-ssm-agent[2087]: 2025-02-13 15:43:31 INFO Checking if agent identity type EC2 can be assumed
Feb 13 15:43:31.742361 amazon-ssm-agent[2087]: 2025-02-13 15:43:31 INFO Agent will take identity from EC2
Feb 13 15:43:31.742361 amazon-ssm-agent[2087]: 2025-02-13 15:43:31 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 15:43:31.742361 amazon-ssm-agent[2087]: 2025-02-13 15:43:31 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 15:43:31.742361 amazon-ssm-agent[2087]: 2025-02-13 15:43:31 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 15:43:31.742361 amazon-ssm-agent[2087]: 2025-02-13 15:43:31 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Feb 13 15:43:31.742361 amazon-ssm-agent[2087]: 2025-02-13 15:43:31 INFO [amazon-ssm-agent] OS: linux, Arch: amd64
Feb 13 15:43:31.742361 amazon-ssm-agent[2087]: 2025-02-13 15:43:31 INFO [amazon-ssm-agent] Starting Core Agent
Feb 13 15:43:31.742361 amazon-ssm-agent[2087]: 2025-02-13 15:43:31 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Feb 13 15:43:31.742361 amazon-ssm-agent[2087]: 2025-02-13 15:43:31 INFO [Registrar] Starting registrar module
Feb 13 15:43:31.742361 amazon-ssm-agent[2087]: 2025-02-13 15:43:31 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Feb 13 15:43:31.742361 amazon-ssm-agent[2087]: 2025-02-13 15:43:31 INFO [EC2Identity] EC2 registration was successful.
Feb 13 15:43:31.742361 amazon-ssm-agent[2087]: 2025-02-13 15:43:31 INFO [CredentialRefresher] credentialRefresher has started
Feb 13 15:43:31.742361 amazon-ssm-agent[2087]: 2025-02-13 15:43:31 INFO [CredentialRefresher] Starting credentials refresher loop
Feb 13 15:43:31.742361 amazon-ssm-agent[2087]: 2025-02-13 15:43:31 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Feb 13 15:43:31.743239 amazon-ssm-agent[2087]: 2025-02-13 15:43:31 INFO [CredentialRefresher] Next credential rotation will be in 32.208326119333336 minutes
Feb 13 15:43:31.845096 systemd[2115]: Queued start job for default target default.target.
Feb 13 15:43:31.862654 systemd[2115]: Created slice app.slice - User Application Slice.
Feb 13 15:43:31.862700 systemd[2115]: Reached target paths.target - Paths.
Feb 13 15:43:31.863112 systemd[2115]: Reached target timers.target - Timers.
Feb 13 15:43:31.866148 systemd[2115]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 13 15:43:31.891327 systemd[2115]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 13 15:43:31.891487 systemd[2115]: Reached target sockets.target - Sockets.
Feb 13 15:43:31.892027 systemd[2115]: Reached target basic.target - Basic System.
Feb 13 15:43:31.892204 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 13 15:43:31.892853 systemd[2115]: Reached target default.target - Main User Target.
Feb 13 15:43:31.893430 systemd[2115]: Startup finished in 276ms.
Feb 13 15:43:31.898283 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 13 15:43:32.064437 systemd[1]: Started sshd@1-172.31.30.13:22-139.178.68.195:47240.service - OpenSSH per-connection server daemon (139.178.68.195:47240).
Feb 13 15:43:32.258531 sshd[2126]: Accepted publickey for core from 139.178.68.195 port 47240 ssh2: RSA SHA256:yvubg5TE4tQn1Ceu414+Zp2Lz0TbCrCQ13qkOHSJoSg
Feb 13 15:43:32.260618 sshd-session[2126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:43:32.272325 systemd-logind[1885]: New session 2 of user core.
Feb 13 15:43:32.278260 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 15:43:32.404392 sshd[2128]: Connection closed by 139.178.68.195 port 47240
Feb 13 15:43:32.405048 sshd-session[2126]: pam_unix(sshd:session): session closed for user core
Feb 13 15:43:32.413051 systemd[1]: sshd@1-172.31.30.13:22-139.178.68.195:47240.service: Deactivated successfully.
Feb 13 15:43:32.416864 systemd[1]: session-2.scope: Deactivated successfully.
Feb 13 15:43:32.418277 systemd-logind[1885]: Session 2 logged out. Waiting for processes to exit.
Feb 13 15:43:32.420458 systemd-logind[1885]: Removed session 2.
Feb 13 15:43:32.449600 systemd[1]: Started sshd@2-172.31.30.13:22-139.178.68.195:47246.service - OpenSSH per-connection server daemon (139.178.68.195:47246).
Feb 13 15:43:32.625600 sshd[2134]: Accepted publickey for core from 139.178.68.195 port 47246 ssh2: RSA SHA256:yvubg5TE4tQn1Ceu414+Zp2Lz0TbCrCQ13qkOHSJoSg
Feb 13 15:43:32.627569 sshd-session[2134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:43:32.639504 systemd-logind[1885]: New session 3 of user core.
Feb 13 15:43:32.649271 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 15:43:32.761446 amazon-ssm-agent[2087]: 2025-02-13 15:43:32 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Feb 13 15:43:32.772122 sshd[2136]: Connection closed by 139.178.68.195 port 47246
Feb 13 15:43:32.773628 sshd-session[2134]: pam_unix(sshd:session): session closed for user core
Feb 13 15:43:32.781895 systemd[1]: sshd@2-172.31.30.13:22-139.178.68.195:47246.service: Deactivated successfully.
Feb 13 15:43:32.785570 systemd[1]: session-3.scope: Deactivated successfully.
Feb 13 15:43:32.789903 systemd-logind[1885]: Session 3 logged out. Waiting for processes to exit.
Feb 13 15:43:32.791932 systemd-logind[1885]: Removed session 3.
Feb 13 15:43:32.862591 amazon-ssm-agent[2087]: 2025-02-13 15:43:32 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2139) started
Feb 13 15:43:32.964370 amazon-ssm-agent[2087]: 2025-02-13 15:43:32 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Feb 13 15:43:32.981218 ntpd[1878]: Listen normally on 6 eth0 [fe80::42f:81ff:febd:81af%2]:123
Feb 13 15:43:32.981893 ntpd[1878]: 13 Feb 15:43:32 ntpd[1878]: Listen normally on 6 eth0 [fe80::42f:81ff:febd:81af%2]:123
Feb 13 15:43:33.457439 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:43:33.459499 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 15:43:33.461756 systemd[1]: Startup finished in 911ms (kernel) + 8.057s (initrd) + 9.717s (userspace) = 18.686s.
Feb 13 15:43:33.717789 (kubelet)[2158]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:43:35.493825 kubelet[2158]: E0213 15:43:35.493759    2158 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:43:35.497241 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:43:35.497438 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:43:35.499189 systemd[1]: kubelet.service: Consumed 983ms CPU time, 241.5M memory peak.
Feb 13 15:43:37.363150 systemd-resolved[1818]: Clock change detected. Flushing caches.
Feb 13 15:43:43.192614 systemd[1]: Started sshd@3-172.31.30.13:22-139.178.68.195:38020.service - OpenSSH per-connection server daemon (139.178.68.195:38020).
Feb 13 15:43:43.381427 sshd[2170]: Accepted publickey for core from 139.178.68.195 port 38020 ssh2: RSA SHA256:yvubg5TE4tQn1Ceu414+Zp2Lz0TbCrCQ13qkOHSJoSg
Feb 13 15:43:43.382876 sshd-session[2170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:43:43.391870 systemd-logind[1885]: New session 4 of user core.
Feb 13 15:43:43.402075 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 15:43:43.523188 sshd[2172]: Connection closed by 139.178.68.195 port 38020
Feb 13 15:43:43.523937 sshd-session[2170]: pam_unix(sshd:session): session closed for user core
Feb 13 15:43:43.527972 systemd[1]: sshd@3-172.31.30.13:22-139.178.68.195:38020.service: Deactivated successfully.
Feb 13 15:43:43.530198 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 15:43:43.532180 systemd-logind[1885]: Session 4 logged out. Waiting for processes to exit.
Feb 13 15:43:43.533674 systemd-logind[1885]: Removed session 4.
Feb 13 15:43:43.574702 systemd[1]: Started sshd@4-172.31.30.13:22-139.178.68.195:38036.service - OpenSSH per-connection server daemon (139.178.68.195:38036).
Feb 13 15:43:43.748784 sshd[2178]: Accepted publickey for core from 139.178.68.195 port 38036 ssh2: RSA SHA256:yvubg5TE4tQn1Ceu414+Zp2Lz0TbCrCQ13qkOHSJoSg
Feb 13 15:43:43.750659 sshd-session[2178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:43:43.759106 systemd-logind[1885]: New session 5 of user core.
Feb 13 15:43:43.767131 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 15:43:43.883423 sshd[2180]: Connection closed by 139.178.68.195 port 38036
Feb 13 15:43:43.884576 sshd-session[2178]: pam_unix(sshd:session): session closed for user core
Feb 13 15:43:43.888852 systemd[1]: sshd@4-172.31.30.13:22-139.178.68.195:38036.service: Deactivated successfully.
Feb 13 15:43:43.891199 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 15:43:43.893734 systemd-logind[1885]: Session 5 logged out. Waiting for processes to exit.
Feb 13 15:43:43.895022 systemd-logind[1885]: Removed session 5.
Feb 13 15:43:43.918831 systemd[1]: Started sshd@5-172.31.30.13:22-139.178.68.195:38046.service - OpenSSH per-connection server daemon (139.178.68.195:38046).
Feb 13 15:43:44.137660 sshd[2186]: Accepted publickey for core from 139.178.68.195 port 38046 ssh2: RSA SHA256:yvubg5TE4tQn1Ceu414+Zp2Lz0TbCrCQ13qkOHSJoSg
Feb 13 15:43:44.145655 sshd-session[2186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:43:44.156381 systemd-logind[1885]: New session 6 of user core.
Feb 13 15:43:44.168541 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 15:43:44.313773 sshd[2188]: Connection closed by 139.178.68.195 port 38046
Feb 13 15:43:44.314477 sshd-session[2186]: pam_unix(sshd:session): session closed for user core
Feb 13 15:43:44.325279 systemd[1]: sshd@5-172.31.30.13:22-139.178.68.195:38046.service: Deactivated successfully.
Feb 13 15:43:44.335134 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 15:43:44.369636 systemd-logind[1885]: Session 6 logged out. Waiting for processes to exit.
Feb 13 15:43:44.375283 systemd[1]: Started sshd@6-172.31.30.13:22-139.178.68.195:38058.service - OpenSSH per-connection server daemon (139.178.68.195:38058).
Feb 13 15:43:44.378388 systemd-logind[1885]: Removed session 6.
Feb 13 15:43:44.552930 sshd[2193]: Accepted publickey for core from 139.178.68.195 port 38058 ssh2: RSA SHA256:yvubg5TE4tQn1Ceu414+Zp2Lz0TbCrCQ13qkOHSJoSg
Feb 13 15:43:44.554676 sshd-session[2193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:43:44.560982 systemd-logind[1885]: New session 7 of user core.
Feb 13 15:43:44.568153 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 15:43:44.682949 sudo[2197]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 13 15:43:44.683439 sudo[2197]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:43:44.695882 sudo[2197]: pam_unix(sudo:session): session closed for user root
Feb 13 15:43:44.718251 sshd[2196]: Connection closed by 139.178.68.195 port 38058
Feb 13 15:43:44.719959 sshd-session[2193]: pam_unix(sshd:session): session closed for user core
Feb 13 15:43:44.724778 systemd[1]: sshd@6-172.31.30.13:22-139.178.68.195:38058.service: Deactivated successfully.
Feb 13 15:43:44.727221 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 15:43:44.729818 systemd-logind[1885]: Session 7 logged out. Waiting for processes to exit.
Feb 13 15:43:44.733527 systemd-logind[1885]: Removed session 7.
Feb 13 15:43:44.757569 systemd[1]: Started sshd@7-172.31.30.13:22-139.178.68.195:38074.service - OpenSSH per-connection server daemon (139.178.68.195:38074).
Feb 13 15:43:44.925952 sshd[2203]: Accepted publickey for core from 139.178.68.195 port 38074 ssh2: RSA SHA256:yvubg5TE4tQn1Ceu414+Zp2Lz0TbCrCQ13qkOHSJoSg
Feb 13 15:43:44.928024 sshd-session[2203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:43:44.936199 systemd-logind[1885]: New session 8 of user core.
Feb 13 15:43:44.945051 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 15:43:45.050681 sudo[2207]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 13 15:43:45.051096 sudo[2207]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:43:45.056903 sudo[2207]: pam_unix(sudo:session): session closed for user root
Feb 13 15:43:45.063873 sudo[2206]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Feb 13 15:43:45.064539 sudo[2206]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:43:45.082140 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 15:43:45.146508 augenrules[2229]: No rules
Feb 13 15:43:45.148203 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 15:43:45.148435 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 15:43:45.152102 sudo[2206]: pam_unix(sudo:session): session closed for user root
Feb 13 15:43:45.174684 sshd[2205]: Connection closed by 139.178.68.195 port 38074
Feb 13 15:43:45.175713 sshd-session[2203]: pam_unix(sshd:session): session closed for user core
Feb 13 15:43:45.188344 systemd[1]: sshd@7-172.31.30.13:22-139.178.68.195:38074.service: Deactivated successfully.
Feb 13 15:43:45.197082 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 15:43:45.235096 systemd-logind[1885]: Session 8 logged out. Waiting for processes to exit.
Feb 13 15:43:45.247338 systemd[1]: Started sshd@8-172.31.30.13:22-139.178.68.195:38084.service - OpenSSH per-connection server daemon (139.178.68.195:38084).
Feb 13 15:43:45.256964 systemd-logind[1885]: Removed session 8.
Feb 13 15:43:45.465814 sshd[2237]: Accepted publickey for core from 139.178.68.195 port 38084 ssh2: RSA SHA256:yvubg5TE4tQn1Ceu414+Zp2Lz0TbCrCQ13qkOHSJoSg
Feb 13 15:43:45.469634 sshd-session[2237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 15:43:45.483364 systemd-logind[1885]: New session 9 of user core.
Feb 13 15:43:45.495311 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 15:43:45.608428 sudo[2241]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 15:43:45.610167 sudo[2241]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 15:43:46.130231 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 13 15:43:46.138341 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:43:46.407291 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:43:46.407635 (kubelet)[2263]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 15:43:46.471521 kubelet[2263]: E0213 15:43:46.471436    2263 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 15:43:46.476098 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 15:43:46.476275 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 15:43:46.476744 systemd[1]: kubelet.service: Consumed 171ms CPU time, 97.3M memory peak.
Feb 13 15:43:46.890946 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:43:46.891326 systemd[1]: kubelet.service: Consumed 171ms CPU time, 97.3M memory peak.
Feb 13 15:43:46.901371 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:43:46.989413 systemd[1]: Reload requested from client PID 2288 ('systemctl') (unit session-9.scope)...
Feb 13 15:43:46.989434 systemd[1]: Reloading...
Feb 13 15:43:47.184820 zram_generator::config[2330]: No configuration found.
Feb 13 15:43:47.340299 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 15:43:47.491919 systemd[1]: Reloading finished in 501 ms.
Feb 13 15:43:47.607093 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:43:47.615555 (kubelet)[2385]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:43:47.618899 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:43:47.619447 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 15:43:47.619767 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:43:47.619958 systemd[1]: kubelet.service: Consumed 122ms CPU time, 82.7M memory peak.
Feb 13 15:43:47.628426 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 15:43:47.901131 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 15:43:47.909397 (kubelet)[2396]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 15:43:47.982821 kubelet[2396]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:43:47.985032 kubelet[2396]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 15:43:47.985032 kubelet[2396]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 15:43:47.985870 kubelet[2396]: I0213 15:43:47.985026    2396 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 15:43:48.600385 kubelet[2396]: I0213 15:43:48.600336    2396 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Feb 13 15:43:48.600385 kubelet[2396]: I0213 15:43:48.600371    2396 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 15:43:48.601034 kubelet[2396]: I0213 15:43:48.601005    2396 server.go:929] "Client rotation is on, will bootstrap in background"
Feb 13 15:43:48.639367 kubelet[2396]: I0213 15:43:48.638990    2396 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 15:43:48.650711 kubelet[2396]: E0213 15:43:48.650672    2396 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Feb 13 15:43:48.650925 kubelet[2396]: I0213 15:43:48.650905    2396 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Feb 13 15:43:48.658822 kubelet[2396]: I0213 15:43:48.658334    2396 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Feb 13 15:43:48.658822 kubelet[2396]: I0213 15:43:48.658483    2396 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 13 15:43:48.658822 kubelet[2396]: I0213 15:43:48.658636    2396 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 15:43:48.659444 kubelet[2396]: I0213 15:43:48.658668    2396 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.30.13","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 13 15:43:48.659914 kubelet[2396]: I0213 15:43:48.659896    2396 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 15:43:48.660234 kubelet[2396]: I0213 15:43:48.660121    2396 container_manager_linux.go:300] "Creating device plugin manager"
Feb 13 15:43:48.660544 kubelet[2396]: I0213 15:43:48.660529    2396 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:43:48.665026 kubelet[2396]: I0213 15:43:48.665005    2396 kubelet.go:408] "Attempting to sync node with API server"
Feb 13 15:43:48.665528 kubelet[2396]: I0213 15:43:48.665075    2396 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 15:43:48.665528 kubelet[2396]: I0213 15:43:48.665112    2396 kubelet.go:314] "Adding apiserver pod source"
Feb 13 15:43:48.665528 kubelet[2396]: I0213 15:43:48.665125    2396 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 15:43:48.667625 kubelet[2396]: E0213 15:43:48.666926    2396 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:43:48.673372 kubelet[2396]: E0213 15:43:48.673235    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:43:48.674842 kubelet[2396]: I0213 15:43:48.674109    2396 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 15:43:48.674842 kubelet[2396]: W0213 15:43:48.674612    2396 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 13 15:43:48.674842 kubelet[2396]: E0213 15:43:48.674647    2396 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Feb 13 15:43:48.675073 kubelet[2396]: W0213 15:43:48.675047    2396 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.30.13" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 13 15:43:48.675291 kubelet[2396]: E0213 15:43:48.675174    2396 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"172.31.30.13\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Feb 13 15:43:48.677569 kubelet[2396]: I0213 15:43:48.677451    2396 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 15:43:48.679213 kubelet[2396]: W0213 15:43:48.679177    2396 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 15:43:48.680326 kubelet[2396]: I0213 15:43:48.680128    2396 server.go:1269] "Started kubelet"
Feb 13 15:43:48.682622 kubelet[2396]: I0213 15:43:48.682594    2396 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 15:43:48.692669 kubelet[2396]: I0213 15:43:48.691185    2396 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 15:43:48.692825 kubelet[2396]: I0213 15:43:48.692780    2396 server.go:460] "Adding debug handlers to kubelet server"
Feb 13 15:43:48.694881 kubelet[2396]: I0213 15:43:48.694817    2396 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 15:43:48.695912 kubelet[2396]: I0213 15:43:48.695258    2396 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 15:43:48.695912 kubelet[2396]: I0213 15:43:48.695562    2396 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Feb 13 15:43:48.697596 kubelet[2396]: I0213 15:43:48.697567    2396 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 13 15:43:48.698013 kubelet[2396]: E0213 15:43:48.697984    2396 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.30.13\" not found"
Feb 13 15:43:48.698520 kubelet[2396]: I0213 15:43:48.698484    2396 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 13 15:43:48.698590 kubelet[2396]: I0213 15:43:48.698542    2396 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 15:43:48.701307 kubelet[2396]: I0213 15:43:48.701265    2396 factory.go:221] Registration of the systemd container factory successfully
Feb 13 15:43:48.701504 kubelet[2396]: I0213 15:43:48.701389    2396 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 15:43:48.706839 kubelet[2396]: E0213 15:43:48.706510    2396 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 15:43:48.710353 kubelet[2396]: I0213 15:43:48.707262    2396 factory.go:221] Registration of the containerd container factory successfully
Feb 13 15:43:48.728127 kubelet[2396]: E0213 15:43:48.720537    2396 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.30.13.1823cef40cf87f39  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.30.13,UID:172.31.30.13,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.30.13,},FirstTimestamp:2025-02-13 15:43:48.680097593 +0000 UTC m=+0.761979881,LastTimestamp:2025-02-13 15:43:48.680097593 +0000 UTC m=+0.761979881,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.30.13,}"
Feb 13 15:43:48.730812 kubelet[2396]: W0213 15:43:48.728750    2396 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 13 15:43:48.730812 kubelet[2396]: E0213 15:43:48.728791    2396 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
Feb 13 15:43:48.730812 kubelet[2396]: E0213 15:43:48.728905    2396 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.30.13\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Feb 13 15:43:48.735691 kubelet[2396]: E0213 15:43:48.735569    2396 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.30.13.1823cef40e8b414c  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.30.13,UID:172.31.30.13,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.31.30.13,},FirstTimestamp:2025-02-13 15:43:48.706492748 +0000 UTC m=+0.788375043,LastTimestamp:2025-02-13 15:43:48.706492748 +0000 UTC m=+0.788375043,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.30.13,}"
Feb 13 15:43:48.741971 kubelet[2396]: I0213 15:43:48.741940    2396 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 15:43:48.742269 kubelet[2396]: I0213 15:43:48.742247    2396 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 15:43:48.742427 kubelet[2396]: I0213 15:43:48.742418    2396 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 15:43:48.745025 kubelet[2396]: I0213 15:43:48.745004    2396 policy_none.go:49] "None policy: Start"
Feb 13 15:43:48.746444 kubelet[2396]: I0213 15:43:48.746428    2396 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 15:43:48.747529 kubelet[2396]: I0213 15:43:48.747516    2396 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 15:43:48.768380 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Feb 13 15:43:48.786394 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Feb 13 15:43:48.795770 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 13 15:43:48.798354 kubelet[2396]: E0213 15:43:48.798051    2396 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.30.13\" not found"
Feb 13 15:43:48.805851 kubelet[2396]: I0213 15:43:48.805104    2396 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 15:43:48.805851 kubelet[2396]: I0213 15:43:48.805455    2396 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 13 15:43:48.805851 kubelet[2396]: I0213 15:43:48.805470    2396 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 15:43:48.806064 kubelet[2396]: I0213 15:43:48.805983    2396 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 15:43:48.809953 kubelet[2396]: E0213 15:43:48.809928    2396 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.30.13\" not found"
Feb 13 15:43:48.835441 kubelet[2396]: I0213 15:43:48.835393    2396 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 15:43:48.838967 kubelet[2396]: I0213 15:43:48.838567    2396 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 15:43:48.838967 kubelet[2396]: I0213 15:43:48.838607    2396 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 15:43:48.838967 kubelet[2396]: I0213 15:43:48.838698    2396 kubelet.go:2321] "Starting kubelet main sync loop"
Feb 13 15:43:48.838967 kubelet[2396]: E0213 15:43:48.838864    2396 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 13 15:43:48.908349 kubelet[2396]: I0213 15:43:48.907304    2396 kubelet_node_status.go:72] "Attempting to register node" node="172.31.30.13"
Feb 13 15:43:48.915496 kubelet[2396]: I0213 15:43:48.915458    2396 kubelet_node_status.go:75] "Successfully registered node" node="172.31.30.13"
Feb 13 15:43:48.915496 kubelet[2396]: E0213 15:43:48.915492    2396 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.30.13\": node \"172.31.30.13\" not found"
Feb 13 15:43:48.933313 kubelet[2396]: E0213 15:43:48.933266    2396 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.30.13\" not found"
Feb 13 15:43:48.973725 sudo[2241]: pam_unix(sudo:session): session closed for user root
Feb 13 15:43:49.003259 sshd[2240]: Connection closed by 139.178.68.195 port 38084
Feb 13 15:43:49.003714 sshd-session[2237]: pam_unix(sshd:session): session closed for user core
Feb 13 15:43:49.013373 systemd[1]: sshd@8-172.31.30.13:22-139.178.68.195:38084.service: Deactivated successfully.
Feb 13 15:43:49.017839 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 15:43:49.018184 systemd[1]: session-9.scope: Consumed 519ms CPU time, 76.2M memory peak.
Feb 13 15:43:49.020991 systemd-logind[1885]: Session 9 logged out. Waiting for processes to exit.
Feb 13 15:43:49.022836 systemd-logind[1885]: Removed session 9.
Feb 13 15:43:49.033728 kubelet[2396]: E0213 15:43:49.033690    2396 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.30.13\" not found"
Feb 13 15:43:49.134165 kubelet[2396]: E0213 15:43:49.134123    2396 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.30.13\" not found"
Feb 13 15:43:49.234882 kubelet[2396]: E0213 15:43:49.234734    2396 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.30.13\" not found"
Feb 13 15:43:49.335448 kubelet[2396]: E0213 15:43:49.335407    2396 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.30.13\" not found"
Feb 13 15:43:49.436427 kubelet[2396]: E0213 15:43:49.436327    2396 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.30.13\" not found"
Feb 13 15:43:49.537392 kubelet[2396]: E0213 15:43:49.537177    2396 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.30.13\" not found"
Feb 13 15:43:49.603931 kubelet[2396]: I0213 15:43:49.603874    2396 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 13 15:43:49.604122 kubelet[2396]: W0213 15:43:49.604090    2396 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 13 15:43:49.638397 kubelet[2396]: E0213 15:43:49.638261    2396 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.30.13\" not found"
Feb 13 15:43:49.673726 kubelet[2396]: E0213 15:43:49.673678    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:43:49.738783 kubelet[2396]: E0213 15:43:49.738727    2396 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.30.13\" not found"
Feb 13 15:43:49.839108 kubelet[2396]: E0213 15:43:49.838845    2396 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.30.13\" not found"
Feb 13 15:43:49.940501 kubelet[2396]: I0213 15:43:49.940442    2396 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Feb 13 15:43:49.940979 containerd[1896]: time="2025-02-13T15:43:49.940772684Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 15:43:49.941934 kubelet[2396]: I0213 15:43:49.941406    2396 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Feb 13 15:43:50.668632 kubelet[2396]: I0213 15:43:50.668584    2396 apiserver.go:52] "Watching apiserver"
Feb 13 15:43:50.673897 kubelet[2396]: E0213 15:43:50.673860    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:43:50.676384 kubelet[2396]: E0213 15:43:50.675562    2396 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpjns" podUID="0c126d30-b7f3-49c7-adb0-7d602f0e81f4"
Feb 13 15:43:50.693519 systemd[1]: Created slice kubepods-besteffort-pod19a347ef_053e_4125_9ac3_cf8bf1913eee.slice - libcontainer container kubepods-besteffort-pod19a347ef_053e_4125_9ac3_cf8bf1913eee.slice.
Feb 13 15:43:50.699594 kubelet[2396]: I0213 15:43:50.699562    2396 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Feb 13 15:43:50.711083 kubelet[2396]: I0213 15:43:50.710114    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/46edc617-fb21-45c4-9c8d-62047fdff427-xtables-lock\") pod \"kube-proxy-96ngx\" (UID: \"46edc617-fb21-45c4-9c8d-62047fdff427\") " pod="kube-system/kube-proxy-96ngx"
Feb 13 15:43:50.711083 kubelet[2396]: I0213 15:43:50.710155    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nspd9\" (UniqueName: \"kubernetes.io/projected/46edc617-fb21-45c4-9c8d-62047fdff427-kube-api-access-nspd9\") pod \"kube-proxy-96ngx\" (UID: \"46edc617-fb21-45c4-9c8d-62047fdff427\") " pod="kube-system/kube-proxy-96ngx"
Feb 13 15:43:50.711083 kubelet[2396]: I0213 15:43:50.710185    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19a347ef-053e-4125-9ac3-cf8bf1913eee-lib-modules\") pod \"calico-node-whtdb\" (UID: \"19a347ef-053e-4125-9ac3-cf8bf1913eee\") " pod="calico-system/calico-node-whtdb"
Feb 13 15:43:50.711083 kubelet[2396]: I0213 15:43:50.710211    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0c126d30-b7f3-49c7-adb0-7d602f0e81f4-registration-dir\") pod \"csi-node-driver-wpjns\" (UID: \"0c126d30-b7f3-49c7-adb0-7d602f0e81f4\") " pod="calico-system/csi-node-driver-wpjns"
Feb 13 15:43:50.711083 kubelet[2396]: I0213 15:43:50.710236    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/19a347ef-053e-4125-9ac3-cf8bf1913eee-var-run-calico\") pod \"calico-node-whtdb\" (UID: \"19a347ef-053e-4125-9ac3-cf8bf1913eee\") " pod="calico-system/calico-node-whtdb"
Feb 13 15:43:50.711360 kubelet[2396]: I0213 15:43:50.710257    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/19a347ef-053e-4125-9ac3-cf8bf1913eee-var-lib-calico\") pod \"calico-node-whtdb\" (UID: \"19a347ef-053e-4125-9ac3-cf8bf1913eee\") " pod="calico-system/calico-node-whtdb"
Feb 13 15:43:50.711360 kubelet[2396]: I0213 15:43:50.710279    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/19a347ef-053e-4125-9ac3-cf8bf1913eee-flexvol-driver-host\") pod \"calico-node-whtdb\" (UID: \"19a347ef-053e-4125-9ac3-cf8bf1913eee\") " pod="calico-system/calico-node-whtdb"
Feb 13 15:43:50.711360 kubelet[2396]: I0213 15:43:50.710300    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b44n4\" (UniqueName: \"kubernetes.io/projected/19a347ef-053e-4125-9ac3-cf8bf1913eee-kube-api-access-b44n4\") pod \"calico-node-whtdb\" (UID: \"19a347ef-053e-4125-9ac3-cf8bf1913eee\") " pod="calico-system/calico-node-whtdb"
Feb 13 15:43:50.711360 kubelet[2396]: I0213 15:43:50.710326    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/19a347ef-053e-4125-9ac3-cf8bf1913eee-node-certs\") pod \"calico-node-whtdb\" (UID: \"19a347ef-053e-4125-9ac3-cf8bf1913eee\") " pod="calico-system/calico-node-whtdb"
Feb 13 15:43:50.711360 kubelet[2396]: I0213 15:43:50.710350    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/19a347ef-053e-4125-9ac3-cf8bf1913eee-cni-bin-dir\") pod \"calico-node-whtdb\" (UID: \"19a347ef-053e-4125-9ac3-cf8bf1913eee\") " pod="calico-system/calico-node-whtdb"
Feb 13 15:43:50.712107 kubelet[2396]: I0213 15:43:50.710374    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0c126d30-b7f3-49c7-adb0-7d602f0e81f4-varrun\") pod \"csi-node-driver-wpjns\" (UID: \"0c126d30-b7f3-49c7-adb0-7d602f0e81f4\") " pod="calico-system/csi-node-driver-wpjns"
Feb 13 15:43:50.712107 kubelet[2396]: I0213 15:43:50.710397    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0c126d30-b7f3-49c7-adb0-7d602f0e81f4-socket-dir\") pod \"csi-node-driver-wpjns\" (UID: \"0c126d30-b7f3-49c7-adb0-7d602f0e81f4\") " pod="calico-system/csi-node-driver-wpjns"
Feb 13 15:43:50.712107 kubelet[2396]: I0213 15:43:50.710420    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19a347ef-053e-4125-9ac3-cf8bf1913eee-tigera-ca-bundle\") pod \"calico-node-whtdb\" (UID: \"19a347ef-053e-4125-9ac3-cf8bf1913eee\") " pod="calico-system/calico-node-whtdb"
Feb 13 15:43:50.712107 kubelet[2396]: I0213 15:43:50.710443    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/19a347ef-053e-4125-9ac3-cf8bf1913eee-cni-net-dir\") pod \"calico-node-whtdb\" (UID: \"19a347ef-053e-4125-9ac3-cf8bf1913eee\") " pod="calico-system/calico-node-whtdb"
Feb 13 15:43:50.712107 kubelet[2396]: I0213 15:43:50.710474    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/19a347ef-053e-4125-9ac3-cf8bf1913eee-cni-log-dir\") pod \"calico-node-whtdb\" (UID: \"19a347ef-053e-4125-9ac3-cf8bf1913eee\") " pod="calico-system/calico-node-whtdb"
Feb 13 15:43:50.712556 kubelet[2396]: I0213 15:43:50.710497    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0c126d30-b7f3-49c7-adb0-7d602f0e81f4-kubelet-dir\") pod \"csi-node-driver-wpjns\" (UID: \"0c126d30-b7f3-49c7-adb0-7d602f0e81f4\") " pod="calico-system/csi-node-driver-wpjns"
Feb 13 15:43:50.712556 kubelet[2396]: I0213 15:43:50.710519    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/46edc617-fb21-45c4-9c8d-62047fdff427-kube-proxy\") pod \"kube-proxy-96ngx\" (UID: \"46edc617-fb21-45c4-9c8d-62047fdff427\") " pod="kube-system/kube-proxy-96ngx"
Feb 13 15:43:50.712556 kubelet[2396]: I0213 15:43:50.710544    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46edc617-fb21-45c4-9c8d-62047fdff427-lib-modules\") pod \"kube-proxy-96ngx\" (UID: \"46edc617-fb21-45c4-9c8d-62047fdff427\") " pod="kube-system/kube-proxy-96ngx"
Feb 13 15:43:50.712556 kubelet[2396]: I0213 15:43:50.710568    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19a347ef-053e-4125-9ac3-cf8bf1913eee-xtables-lock\") pod \"calico-node-whtdb\" (UID: \"19a347ef-053e-4125-9ac3-cf8bf1913eee\") " pod="calico-system/calico-node-whtdb"
Feb 13 15:43:50.712556 kubelet[2396]: I0213 15:43:50.710599    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/19a347ef-053e-4125-9ac3-cf8bf1913eee-policysync\") pod \"calico-node-whtdb\" (UID: \"19a347ef-053e-4125-9ac3-cf8bf1913eee\") " pod="calico-system/calico-node-whtdb"
Feb 13 15:43:50.713733 kubelet[2396]: I0213 15:43:50.710740    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l97rh\" (UniqueName: \"kubernetes.io/projected/0c126d30-b7f3-49c7-adb0-7d602f0e81f4-kube-api-access-l97rh\") pod \"csi-node-driver-wpjns\" (UID: \"0c126d30-b7f3-49c7-adb0-7d602f0e81f4\") " pod="calico-system/csi-node-driver-wpjns"
Feb 13 15:43:50.718310 systemd[1]: Created slice kubepods-besteffort-pod46edc617_fb21_45c4_9c8d_62047fdff427.slice - libcontainer container kubepods-besteffort-pod46edc617_fb21_45c4_9c8d_62047fdff427.slice.
Feb 13 15:43:50.826851 kubelet[2396]: E0213 15:43:50.826353    2396 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:43:50.826851 kubelet[2396]: W0213 15:43:50.826381    2396 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:43:50.826851 kubelet[2396]: E0213 15:43:50.826415    2396 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:43:50.869960 kubelet[2396]: E0213 15:43:50.869853    2396 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:43:50.869960 kubelet[2396]: W0213 15:43:50.869940    2396 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:43:50.869960 kubelet[2396]: E0213 15:43:50.869976    2396 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:43:50.871284 kubelet[2396]: E0213 15:43:50.870498    2396 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:43:50.871284 kubelet[2396]: W0213 15:43:50.870515    2396 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:43:50.871284 kubelet[2396]: E0213 15:43:50.870531    2396 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:43:50.871593 kubelet[2396]: E0213 15:43:50.871564    2396 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 15:43:50.871593 kubelet[2396]: W0213 15:43:50.871576    2396 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 15:43:50.872204 kubelet[2396]: E0213 15:43:50.871591    2396 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 15:43:51.016920 containerd[1896]: time="2025-02-13T15:43:51.016682790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-whtdb,Uid:19a347ef-053e-4125-9ac3-cf8bf1913eee,Namespace:calico-system,Attempt:0,}"
Feb 13 15:43:51.026115 containerd[1896]: time="2025-02-13T15:43:51.022777227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-96ngx,Uid:46edc617-fb21-45c4-9c8d-62047fdff427,Namespace:kube-system,Attempt:0,}"
Feb 13 15:43:51.663730 containerd[1896]: time="2025-02-13T15:43:51.663609914Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:43:51.665951 containerd[1896]: time="2025-02-13T15:43:51.665910060Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:43:51.667245 containerd[1896]: time="2025-02-13T15:43:51.667210896Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Feb 13 15:43:51.668811 containerd[1896]: time="2025-02-13T15:43:51.668762526Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:43:51.669653 containerd[1896]: time="2025-02-13T15:43:51.669592444Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 15:43:51.672096 containerd[1896]: time="2025-02-13T15:43:51.672040588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 15:43:51.674568 containerd[1896]: time="2025-02-13T15:43:51.673782255Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 656.905245ms"
Feb 13 15:43:51.674660 kubelet[2396]: E0213 15:43:51.674496    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:43:51.677551 containerd[1896]: time="2025-02-13T15:43:51.677484510Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 654.165482ms"
Feb 13 15:43:51.830438 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2759932259.mount: Deactivated successfully.
Feb 13 15:43:51.840400 kubelet[2396]: E0213 15:43:51.839982    2396 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpjns" podUID="0c126d30-b7f3-49c7-adb0-7d602f0e81f4"
Feb 13 15:43:51.912978 containerd[1896]: time="2025-02-13T15:43:51.909400777Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:43:51.912978 containerd[1896]: time="2025-02-13T15:43:51.912543498Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:43:51.912978 containerd[1896]: time="2025-02-13T15:43:51.912573169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:43:51.912978 containerd[1896]: time="2025-02-13T15:43:51.912696200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:43:51.924481 containerd[1896]: time="2025-02-13T15:43:51.923944369Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:43:51.924481 containerd[1896]: time="2025-02-13T15:43:51.924015064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:43:51.924481 containerd[1896]: time="2025-02-13T15:43:51.924044754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:43:51.924481 containerd[1896]: time="2025-02-13T15:43:51.924152539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:43:52.100064 systemd[1]: Started cri-containerd-d69eb6bb16c0e41c47d22f4f60582f5403693c1b9c718ecb29033355e7bf664a.scope - libcontainer container d69eb6bb16c0e41c47d22f4f60582f5403693c1b9c718ecb29033355e7bf664a.
Feb 13 15:43:52.102396 systemd[1]: Started cri-containerd-e8a1ba7cc6a1527825515b4c94ba70d92febff98b58a879c8ae974daab3874fe.scope - libcontainer container e8a1ba7cc6a1527825515b4c94ba70d92febff98b58a879c8ae974daab3874fe.
Feb 13 15:43:52.173762 containerd[1896]: time="2025-02-13T15:43:52.173629116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-whtdb,Uid:19a347ef-053e-4125-9ac3-cf8bf1913eee,Namespace:calico-system,Attempt:0,} returns sandbox id \"e8a1ba7cc6a1527825515b4c94ba70d92febff98b58a879c8ae974daab3874fe\""
Feb 13 15:43:52.184216 containerd[1896]: time="2025-02-13T15:43:52.177583454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Feb 13 15:43:52.195511 containerd[1896]: time="2025-02-13T15:43:52.195277852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-96ngx,Uid:46edc617-fb21-45c4-9c8d-62047fdff427,Namespace:kube-system,Attempt:0,} returns sandbox id \"d69eb6bb16c0e41c47d22f4f60582f5403693c1b9c718ecb29033355e7bf664a\""
Feb 13 15:43:52.675649 kubelet[2396]: E0213 15:43:52.675023    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:43:53.675449 kubelet[2396]: E0213 15:43:53.675418    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:43:53.840725 kubelet[2396]: E0213 15:43:53.840658    2396 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpjns" podUID="0c126d30-b7f3-49c7-adb0-7d602f0e81f4"
Feb 13 15:43:53.866384 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2540940607.mount: Deactivated successfully.
Feb 13 15:43:54.015417 containerd[1896]: time="2025-02-13T15:43:54.015270533Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:43:54.016769 containerd[1896]: time="2025-02-13T15:43:54.016603208Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343"
Feb 13 15:43:54.019063 containerd[1896]: time="2025-02-13T15:43:54.017751729Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:43:54.027200 containerd[1896]: time="2025-02-13T15:43:54.025641757Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:43:54.027200 containerd[1896]: time="2025-02-13T15:43:54.026988735Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.849350563s"
Feb 13 15:43:54.027200 containerd[1896]: time="2025-02-13T15:43:54.027030901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Feb 13 15:43:54.029343 containerd[1896]: time="2025-02-13T15:43:54.029302264Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\""
Feb 13 15:43:54.030725 containerd[1896]: time="2025-02-13T15:43:54.030690740Z" level=info msg="CreateContainer within sandbox \"e8a1ba7cc6a1527825515b4c94ba70d92febff98b58a879c8ae974daab3874fe\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Feb 13 15:43:54.056097 containerd[1896]: time="2025-02-13T15:43:54.056043736Z" level=info msg="CreateContainer within sandbox \"e8a1ba7cc6a1527825515b4c94ba70d92febff98b58a879c8ae974daab3874fe\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8a7fcb45266decfa4b570b9e98c2efdb961d162d9ac2d140d56058d229f353de\""
Feb 13 15:43:54.058819 containerd[1896]: time="2025-02-13T15:43:54.057081018Z" level=info msg="StartContainer for \"8a7fcb45266decfa4b570b9e98c2efdb961d162d9ac2d140d56058d229f353de\""
Feb 13 15:43:54.116173 systemd[1]: Started cri-containerd-8a7fcb45266decfa4b570b9e98c2efdb961d162d9ac2d140d56058d229f353de.scope - libcontainer container 8a7fcb45266decfa4b570b9e98c2efdb961d162d9ac2d140d56058d229f353de.
Feb 13 15:43:54.165474 containerd[1896]: time="2025-02-13T15:43:54.164916546Z" level=info msg="StartContainer for \"8a7fcb45266decfa4b570b9e98c2efdb961d162d9ac2d140d56058d229f353de\" returns successfully"
Feb 13 15:43:54.187025 systemd[1]: cri-containerd-8a7fcb45266decfa4b570b9e98c2efdb961d162d9ac2d140d56058d229f353de.scope: Deactivated successfully.
Feb 13 15:43:54.263144 containerd[1896]: time="2025-02-13T15:43:54.263080184Z" level=info msg="shim disconnected" id=8a7fcb45266decfa4b570b9e98c2efdb961d162d9ac2d140d56058d229f353de namespace=k8s.io
Feb 13 15:43:54.263144 containerd[1896]: time="2025-02-13T15:43:54.263137239Z" level=warning msg="cleaning up after shim disconnected" id=8a7fcb45266decfa4b570b9e98c2efdb961d162d9ac2d140d56058d229f353de namespace=k8s.io
Feb 13 15:43:54.263144 containerd[1896]: time="2025-02-13T15:43:54.263148452Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:43:54.677983 kubelet[2396]: E0213 15:43:54.677922    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:43:54.791025 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8a7fcb45266decfa4b570b9e98c2efdb961d162d9ac2d140d56058d229f353de-rootfs.mount: Deactivated successfully.
Feb 13 15:43:55.478906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2545806093.mount: Deactivated successfully.
Feb 13 15:43:55.678877 kubelet[2396]: E0213 15:43:55.678827    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:43:55.839902 kubelet[2396]: E0213 15:43:55.839762    2396 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpjns" podUID="0c126d30-b7f3-49c7-adb0-7d602f0e81f4"
Feb 13 15:43:56.093636 containerd[1896]: time="2025-02-13T15:43:56.093439087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:43:56.094600 containerd[1896]: time="2025-02-13T15:43:56.094548470Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=30229108"
Feb 13 15:43:56.096526 containerd[1896]: time="2025-02-13T15:43:56.096252413Z" level=info msg="ImageCreate event name:\"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:43:56.099969 containerd[1896]: time="2025-02-13T15:43:56.099919232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:43:56.100694 containerd[1896]: time="2025-02-13T15:43:56.100654747Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"30228127\" in 2.071313884s"
Feb 13 15:43:56.100871 containerd[1896]: time="2025-02-13T15:43:56.100699274Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\""
Feb 13 15:43:56.102830 containerd[1896]: time="2025-02-13T15:43:56.102781557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Feb 13 15:43:56.104272 containerd[1896]: time="2025-02-13T15:43:56.104236337Z" level=info msg="CreateContainer within sandbox \"d69eb6bb16c0e41c47d22f4f60582f5403693c1b9c718ecb29033355e7bf664a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 15:43:56.125028 containerd[1896]: time="2025-02-13T15:43:56.124870479Z" level=info msg="CreateContainer within sandbox \"d69eb6bb16c0e41c47d22f4f60582f5403693c1b9c718ecb29033355e7bf664a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e00a3a53f026f4e468ee4f1cbeffa6f87cbc9f186b03baca6ab8b5d449bd8800\""
Feb 13 15:43:56.126339 containerd[1896]: time="2025-02-13T15:43:56.126306805Z" level=info msg="StartContainer for \"e00a3a53f026f4e468ee4f1cbeffa6f87cbc9f186b03baca6ab8b5d449bd8800\""
Feb 13 15:43:56.167263 systemd[1]: run-containerd-runc-k8s.io-e00a3a53f026f4e468ee4f1cbeffa6f87cbc9f186b03baca6ab8b5d449bd8800-runc.Wx2az0.mount: Deactivated successfully.
Feb 13 15:43:56.179062 systemd[1]: Started cri-containerd-e00a3a53f026f4e468ee4f1cbeffa6f87cbc9f186b03baca6ab8b5d449bd8800.scope - libcontainer container e00a3a53f026f4e468ee4f1cbeffa6f87cbc9f186b03baca6ab8b5d449bd8800.
Feb 13 15:43:56.217185 containerd[1896]: time="2025-02-13T15:43:56.217141732Z" level=info msg="StartContainer for \"e00a3a53f026f4e468ee4f1cbeffa6f87cbc9f186b03baca6ab8b5d449bd8800\" returns successfully"
Feb 13 15:43:56.690351 kubelet[2396]: E0213 15:43:56.686752    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:43:56.896133 kubelet[2396]: I0213 15:43:56.896039    2396 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-96ngx" podStartSLOduration=3.992780097 podStartE2EDuration="7.896025339s" podCreationTimestamp="2025-02-13 15:43:49 +0000 UTC" firstStartedPulling="2025-02-13 15:43:52.198721628 +0000 UTC m=+4.280603914" lastFinishedPulling="2025-02-13 15:43:56.101966872 +0000 UTC m=+8.183849156" observedRunningTime="2025-02-13 15:43:56.895871997 +0000 UTC m=+8.977754289" watchObservedRunningTime="2025-02-13 15:43:56.896025339 +0000 UTC m=+8.977907632"
Feb 13 15:43:57.690833 kubelet[2396]: E0213 15:43:57.690296    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:43:57.840277 kubelet[2396]: E0213 15:43:57.840225    2396 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpjns" podUID="0c126d30-b7f3-49c7-adb0-7d602f0e81f4"
Feb 13 15:43:58.690447 kubelet[2396]: E0213 15:43:58.690409    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:43:59.692202 kubelet[2396]: E0213 15:43:59.692130    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:43:59.839070 kubelet[2396]: E0213 15:43:59.839025    2396 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpjns" podUID="0c126d30-b7f3-49c7-adb0-7d602f0e81f4"
Feb 13 15:44:00.692645 kubelet[2396]: E0213 15:44:00.692587    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:00.780863 containerd[1896]: time="2025-02-13T15:44:00.780620965Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:44:00.784833 containerd[1896]: time="2025-02-13T15:44:00.782022110Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Feb 13 15:44:00.784833 containerd[1896]: time="2025-02-13T15:44:00.784718669Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:44:00.791646 containerd[1896]: time="2025-02-13T15:44:00.791595609Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:44:00.792807 containerd[1896]: time="2025-02-13T15:44:00.792597488Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.689754479s"
Feb 13 15:44:00.792807 containerd[1896]: time="2025-02-13T15:44:00.792644230Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Feb 13 15:44:00.797895 containerd[1896]: time="2025-02-13T15:44:00.797855105Z" level=info msg="CreateContainer within sandbox \"e8a1ba7cc6a1527825515b4c94ba70d92febff98b58a879c8ae974daab3874fe\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Feb 13 15:44:00.845703 containerd[1896]: time="2025-02-13T15:44:00.845643097Z" level=info msg="CreateContainer within sandbox \"e8a1ba7cc6a1527825515b4c94ba70d92febff98b58a879c8ae974daab3874fe\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d6976f2e0819a61fd248f9b1623666f0795d6a8b7784daf00f51cef091581808\""
Feb 13 15:44:00.854197 containerd[1896]: time="2025-02-13T15:44:00.850526270Z" level=info msg="StartContainer for \"d6976f2e0819a61fd248f9b1623666f0795d6a8b7784daf00f51cef091581808\""
Feb 13 15:44:00.914061 systemd[1]: Started cri-containerd-d6976f2e0819a61fd248f9b1623666f0795d6a8b7784daf00f51cef091581808.scope - libcontainer container d6976f2e0819a61fd248f9b1623666f0795d6a8b7784daf00f51cef091581808.
Feb 13 15:44:00.953781 containerd[1896]: time="2025-02-13T15:44:00.953636413Z" level=info msg="StartContainer for \"d6976f2e0819a61fd248f9b1623666f0795d6a8b7784daf00f51cef091581808\" returns successfully"
Feb 13 15:44:01.327985 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 13 15:44:01.693906 kubelet[2396]: E0213 15:44:01.692965    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:01.839685 kubelet[2396]: E0213 15:44:01.839627    2396 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wpjns" podUID="0c126d30-b7f3-49c7-adb0-7d602f0e81f4"
Feb 13 15:44:02.082394 containerd[1896]: time="2025-02-13T15:44:02.082165533Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE         \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 15:44:02.087667 systemd[1]: cri-containerd-d6976f2e0819a61fd248f9b1623666f0795d6a8b7784daf00f51cef091581808.scope: Deactivated successfully.
Feb 13 15:44:02.088647 systemd[1]: cri-containerd-d6976f2e0819a61fd248f9b1623666f0795d6a8b7784daf00f51cef091581808.scope: Consumed 552ms CPU time, 168.9M memory peak, 151M written to disk.
Feb 13 15:44:02.136576 kubelet[2396]: I0213 15:44:02.136156    2396 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Feb 13 15:44:02.179738 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6976f2e0819a61fd248f9b1623666f0795d6a8b7784daf00f51cef091581808-rootfs.mount: Deactivated successfully.
Feb 13 15:44:02.352302 containerd[1896]: time="2025-02-13T15:44:02.352146043Z" level=info msg="shim disconnected" id=d6976f2e0819a61fd248f9b1623666f0795d6a8b7784daf00f51cef091581808 namespace=k8s.io
Feb 13 15:44:02.352302 containerd[1896]: time="2025-02-13T15:44:02.352212230Z" level=warning msg="cleaning up after shim disconnected" id=d6976f2e0819a61fd248f9b1623666f0795d6a8b7784daf00f51cef091581808 namespace=k8s.io
Feb 13 15:44:02.352302 containerd[1896]: time="2025-02-13T15:44:02.352225468Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:44:02.693656 kubelet[2396]: E0213 15:44:02.693514    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:02.950714 containerd[1896]: time="2025-02-13T15:44:02.946260637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Feb 13 15:44:03.694517 kubelet[2396]: E0213 15:44:03.694469    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:03.849776 systemd[1]: Created slice kubepods-besteffort-pod0c126d30_b7f3_49c7_adb0_7d602f0e81f4.slice - libcontainer container kubepods-besteffort-pod0c126d30_b7f3_49c7_adb0_7d602f0e81f4.slice.
Feb 13 15:44:03.866435 containerd[1896]: time="2025-02-13T15:44:03.866388843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wpjns,Uid:0c126d30-b7f3-49c7-adb0-7d602f0e81f4,Namespace:calico-system,Attempt:0,}"
Feb 13 15:44:04.060320 containerd[1896]: time="2025-02-13T15:44:04.060179800Z" level=error msg="Failed to destroy network for sandbox \"98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:04.061061 containerd[1896]: time="2025-02-13T15:44:04.060550572Z" level=error msg="encountered an error cleaning up failed sandbox \"98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:04.061061 containerd[1896]: time="2025-02-13T15:44:04.060716647Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wpjns,Uid:0c126d30-b7f3-49c7-adb0-7d602f0e81f4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:04.064595 kubelet[2396]: E0213 15:44:04.061167    2396 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:04.064595 kubelet[2396]: E0213 15:44:04.064406    2396 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wpjns"
Feb 13 15:44:04.064595 kubelet[2396]: E0213 15:44:04.064458    2396 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wpjns"
Feb 13 15:44:04.064813 kubelet[2396]: E0213 15:44:04.064545    2396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wpjns_calico-system(0c126d30-b7f3-49c7-adb0-7d602f0e81f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wpjns_calico-system(0c126d30-b7f3-49c7-adb0-7d602f0e81f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wpjns" podUID="0c126d30-b7f3-49c7-adb0-7d602f0e81f4"
Feb 13 15:44:04.066299 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556-shm.mount: Deactivated successfully.
Feb 13 15:44:04.696817 kubelet[2396]: E0213 15:44:04.695477    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:04.954320 kubelet[2396]: I0213 15:44:04.954201    2396 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556"
Feb 13 15:44:04.960435 containerd[1896]: time="2025-02-13T15:44:04.960327136Z" level=info msg="StopPodSandbox for \"98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556\""
Feb 13 15:44:04.960914 containerd[1896]: time="2025-02-13T15:44:04.960665388Z" level=info msg="Ensure that sandbox 98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556 in task-service has been cleanup successfully"
Feb 13 15:44:04.960914 containerd[1896]: time="2025-02-13T15:44:04.960904219Z" level=info msg="TearDown network for sandbox \"98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556\" successfully"
Feb 13 15:44:04.961011 containerd[1896]: time="2025-02-13T15:44:04.960923894Z" level=info msg="StopPodSandbox for \"98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556\" returns successfully"
Feb 13 15:44:04.964926 containerd[1896]: time="2025-02-13T15:44:04.961915356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wpjns,Uid:0c126d30-b7f3-49c7-adb0-7d602f0e81f4,Namespace:calico-system,Attempt:1,}"
Feb 13 15:44:04.968761 systemd[1]: run-netns-cni\x2d91479aa3\x2d23c7\x2d0bee\x2d3027\x2dda485462a743.mount: Deactivated successfully.
Feb 13 15:44:05.145586 containerd[1896]: time="2025-02-13T15:44:05.145527925Z" level=error msg="Failed to destroy network for sandbox \"6eaf4246a3d5e7e3165b3333314936cdddcf2d034115976cf86b314f099da07b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:05.162069 containerd[1896]: time="2025-02-13T15:44:05.158752945Z" level=error msg="encountered an error cleaning up failed sandbox \"6eaf4246a3d5e7e3165b3333314936cdddcf2d034115976cf86b314f099da07b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:05.162069 containerd[1896]: time="2025-02-13T15:44:05.160094108Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wpjns,Uid:0c126d30-b7f3-49c7-adb0-7d602f0e81f4,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"6eaf4246a3d5e7e3165b3333314936cdddcf2d034115976cf86b314f099da07b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:05.161677 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6eaf4246a3d5e7e3165b3333314936cdddcf2d034115976cf86b314f099da07b-shm.mount: Deactivated successfully.
Feb 13 15:44:05.162363 kubelet[2396]: E0213 15:44:05.161022    2396 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6eaf4246a3d5e7e3165b3333314936cdddcf2d034115976cf86b314f099da07b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:05.162363 kubelet[2396]: E0213 15:44:05.161090    2396 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6eaf4246a3d5e7e3165b3333314936cdddcf2d034115976cf86b314f099da07b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wpjns"
Feb 13 15:44:05.162363 kubelet[2396]: E0213 15:44:05.161115    2396 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6eaf4246a3d5e7e3165b3333314936cdddcf2d034115976cf86b314f099da07b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wpjns"
Feb 13 15:44:05.162517 kubelet[2396]: E0213 15:44:05.161165    2396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wpjns_calico-system(0c126d30-b7f3-49c7-adb0-7d602f0e81f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wpjns_calico-system(0c126d30-b7f3-49c7-adb0-7d602f0e81f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6eaf4246a3d5e7e3165b3333314936cdddcf2d034115976cf86b314f099da07b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wpjns" podUID="0c126d30-b7f3-49c7-adb0-7d602f0e81f4"
Feb 13 15:44:05.514587 systemd[1]: Created slice kubepods-besteffort-podd086d491_1034_4eac_b9c1_32c930fb005f.slice - libcontainer container kubepods-besteffort-podd086d491_1034_4eac_b9c1_32c930fb005f.slice.
Feb 13 15:44:05.525705 kubelet[2396]: I0213 15:44:05.525264    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbz2x\" (UniqueName: \"kubernetes.io/projected/d086d491-1034-4eac-b9c1-32c930fb005f-kube-api-access-kbz2x\") pod \"calico-typha-7584554fbb-ln4wz\" (UID: \"d086d491-1034-4eac-b9c1-32c930fb005f\") " pod="calico-system/calico-typha-7584554fbb-ln4wz"
Feb 13 15:44:05.525705 kubelet[2396]: I0213 15:44:05.525320    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d086d491-1034-4eac-b9c1-32c930fb005f-typha-certs\") pod \"calico-typha-7584554fbb-ln4wz\" (UID: \"d086d491-1034-4eac-b9c1-32c930fb005f\") " pod="calico-system/calico-typha-7584554fbb-ln4wz"
Feb 13 15:44:05.525705 kubelet[2396]: I0213 15:44:05.525535    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d086d491-1034-4eac-b9c1-32c930fb005f-tigera-ca-bundle\") pod \"calico-typha-7584554fbb-ln4wz\" (UID: \"d086d491-1034-4eac-b9c1-32c930fb005f\") " pod="calico-system/calico-typha-7584554fbb-ln4wz"
Feb 13 15:44:05.696254 kubelet[2396]: E0213 15:44:05.696114    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:05.823762 containerd[1896]: time="2025-02-13T15:44:05.823292985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7584554fbb-ln4wz,Uid:d086d491-1034-4eac-b9c1-32c930fb005f,Namespace:calico-system,Attempt:0,}"
Feb 13 15:44:05.927438 containerd[1896]: time="2025-02-13T15:44:05.927138483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:44:05.927438 containerd[1896]: time="2025-02-13T15:44:05.927207985Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:44:05.927438 containerd[1896]: time="2025-02-13T15:44:05.927227187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:44:05.927669 containerd[1896]: time="2025-02-13T15:44:05.927492349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:44:05.988064 kubelet[2396]: I0213 15:44:05.986986    2396 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6eaf4246a3d5e7e3165b3333314936cdddcf2d034115976cf86b314f099da07b"
Feb 13 15:44:05.992362 containerd[1896]: time="2025-02-13T15:44:05.989635462Z" level=info msg="StopPodSandbox for \"6eaf4246a3d5e7e3165b3333314936cdddcf2d034115976cf86b314f099da07b\""
Feb 13 15:44:05.992362 containerd[1896]: time="2025-02-13T15:44:05.989893081Z" level=info msg="Ensure that sandbox 6eaf4246a3d5e7e3165b3333314936cdddcf2d034115976cf86b314f099da07b in task-service has been cleanup successfully"
Feb 13 15:44:05.992362 containerd[1896]: time="2025-02-13T15:44:05.990909681Z" level=info msg="TearDown network for sandbox \"6eaf4246a3d5e7e3165b3333314936cdddcf2d034115976cf86b314f099da07b\" successfully"
Feb 13 15:44:05.992362 containerd[1896]: time="2025-02-13T15:44:05.990932676Z" level=info msg="StopPodSandbox for \"6eaf4246a3d5e7e3165b3333314936cdddcf2d034115976cf86b314f099da07b\" returns successfully"
Feb 13 15:44:05.992362 containerd[1896]: time="2025-02-13T15:44:05.992252895Z" level=info msg="StopPodSandbox for \"98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556\""
Feb 13 15:44:05.992362 containerd[1896]: time="2025-02-13T15:44:05.992351643Z" level=info msg="TearDown network for sandbox \"98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556\" successfully"
Feb 13 15:44:05.990146 systemd[1]: Started cri-containerd-e7d6233b23ad1ab464ea2a1b2746688bea7f86d423a4d4d4adf641c417d0895a.scope - libcontainer container e7d6233b23ad1ab464ea2a1b2746688bea7f86d423a4d4d4adf641c417d0895a.
Feb 13 15:44:05.993446 containerd[1896]: time="2025-02-13T15:44:05.992374678Z" level=info msg="StopPodSandbox for \"98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556\" returns successfully"
Feb 13 15:44:05.997825 containerd[1896]: time="2025-02-13T15:44:05.996309289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wpjns,Uid:0c126d30-b7f3-49c7-adb0-7d602f0e81f4,Namespace:calico-system,Attempt:2,}"
Feb 13 15:44:05.997124 systemd[1]: run-netns-cni\x2ddf72a9bf\x2d09bf\x2d11c2\x2d25ca\x2d1718c748324a.mount: Deactivated successfully.
Feb 13 15:44:06.216488 containerd[1896]: time="2025-02-13T15:44:06.215673324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7584554fbb-ln4wz,Uid:d086d491-1034-4eac-b9c1-32c930fb005f,Namespace:calico-system,Attempt:0,} returns sandbox id \"e7d6233b23ad1ab464ea2a1b2746688bea7f86d423a4d4d4adf641c417d0895a\""
Feb 13 15:44:06.267359 containerd[1896]: time="2025-02-13T15:44:06.267279749Z" level=error msg="Failed to destroy network for sandbox \"35005c0d515a0eaddf1296b48287bf2fc564795686b923d26c7167822c461e34\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:06.272825 containerd[1896]: time="2025-02-13T15:44:06.269709725Z" level=error msg="encountered an error cleaning up failed sandbox \"35005c0d515a0eaddf1296b48287bf2fc564795686b923d26c7167822c461e34\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:06.272825 containerd[1896]: time="2025-02-13T15:44:06.271025408Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wpjns,Uid:0c126d30-b7f3-49c7-adb0-7d602f0e81f4,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"35005c0d515a0eaddf1296b48287bf2fc564795686b923d26c7167822c461e34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:06.273134 kubelet[2396]: E0213 15:44:06.271273    2396 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35005c0d515a0eaddf1296b48287bf2fc564795686b923d26c7167822c461e34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:06.273134 kubelet[2396]: E0213 15:44:06.271336    2396 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35005c0d515a0eaddf1296b48287bf2fc564795686b923d26c7167822c461e34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wpjns"
Feb 13 15:44:06.273134 kubelet[2396]: E0213 15:44:06.271363    2396 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"35005c0d515a0eaddf1296b48287bf2fc564795686b923d26c7167822c461e34\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wpjns"
Feb 13 15:44:06.273293 kubelet[2396]: E0213 15:44:06.271420    2396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wpjns_calico-system(0c126d30-b7f3-49c7-adb0-7d602f0e81f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wpjns_calico-system(0c126d30-b7f3-49c7-adb0-7d602f0e81f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"35005c0d515a0eaddf1296b48287bf2fc564795686b923d26c7167822c461e34\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wpjns" podUID="0c126d30-b7f3-49c7-adb0-7d602f0e81f4"
Feb 13 15:44:06.273328 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-35005c0d515a0eaddf1296b48287bf2fc564795686b923d26c7167822c461e34-shm.mount: Deactivated successfully.
Feb 13 15:44:06.697355 kubelet[2396]: E0213 15:44:06.696866    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:06.993985 kubelet[2396]: I0213 15:44:06.993848    2396 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="35005c0d515a0eaddf1296b48287bf2fc564795686b923d26c7167822c461e34"
Feb 13 15:44:06.996290 containerd[1896]: time="2025-02-13T15:44:06.996252943Z" level=info msg="StopPodSandbox for \"35005c0d515a0eaddf1296b48287bf2fc564795686b923d26c7167822c461e34\""
Feb 13 15:44:07.000060 containerd[1896]: time="2025-02-13T15:44:06.999188063Z" level=info msg="Ensure that sandbox 35005c0d515a0eaddf1296b48287bf2fc564795686b923d26c7167822c461e34 in task-service has been cleanup successfully"
Feb 13 15:44:07.003035 containerd[1896]: time="2025-02-13T15:44:07.003001364Z" level=info msg="TearDown network for sandbox \"35005c0d515a0eaddf1296b48287bf2fc564795686b923d26c7167822c461e34\" successfully"
Feb 13 15:44:07.003166 containerd[1896]: time="2025-02-13T15:44:07.003037778Z" level=info msg="StopPodSandbox for \"35005c0d515a0eaddf1296b48287bf2fc564795686b923d26c7167822c461e34\" returns successfully"
Feb 13 15:44:07.005073 containerd[1896]: time="2025-02-13T15:44:07.004978604Z" level=info msg="StopPodSandbox for \"6eaf4246a3d5e7e3165b3333314936cdddcf2d034115976cf86b314f099da07b\""
Feb 13 15:44:07.005232 systemd[1]: run-netns-cni\x2d32edcd0b\x2d4619\x2d0f3d\x2d6eb8\x2d2dd3b5ca6d95.mount: Deactivated successfully.
Feb 13 15:44:07.008066 containerd[1896]: time="2025-02-13T15:44:07.007010849Z" level=info msg="TearDown network for sandbox \"6eaf4246a3d5e7e3165b3333314936cdddcf2d034115976cf86b314f099da07b\" successfully"
Feb 13 15:44:07.008066 containerd[1896]: time="2025-02-13T15:44:07.007146735Z" level=info msg="StopPodSandbox for \"6eaf4246a3d5e7e3165b3333314936cdddcf2d034115976cf86b314f099da07b\" returns successfully"
Feb 13 15:44:07.010407 containerd[1896]: time="2025-02-13T15:44:07.010373958Z" level=info msg="StopPodSandbox for \"98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556\""
Feb 13 15:44:07.010606 containerd[1896]: time="2025-02-13T15:44:07.010586844Z" level=info msg="TearDown network for sandbox \"98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556\" successfully"
Feb 13 15:44:07.010764 containerd[1896]: time="2025-02-13T15:44:07.010744724Z" level=info msg="StopPodSandbox for \"98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556\" returns successfully"
Feb 13 15:44:07.011510 containerd[1896]: time="2025-02-13T15:44:07.011469142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wpjns,Uid:0c126d30-b7f3-49c7-adb0-7d602f0e81f4,Namespace:calico-system,Attempt:3,}"
Feb 13 15:44:07.250629 containerd[1896]: time="2025-02-13T15:44:07.250417107Z" level=error msg="Failed to destroy network for sandbox \"b21fa514b2a2e042658f887a5d7d282c7a1ac15f6face81d510765827a1e1297\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:07.254679 containerd[1896]: time="2025-02-13T15:44:07.253730507Z" level=error msg="encountered an error cleaning up failed sandbox \"b21fa514b2a2e042658f887a5d7d282c7a1ac15f6face81d510765827a1e1297\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:07.254382 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b21fa514b2a2e042658f887a5d7d282c7a1ac15f6face81d510765827a1e1297-shm.mount: Deactivated successfully.
Feb 13 15:44:07.258216 containerd[1896]: time="2025-02-13T15:44:07.258052191Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wpjns,Uid:0c126d30-b7f3-49c7-adb0-7d602f0e81f4,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"b21fa514b2a2e042658f887a5d7d282c7a1ac15f6face81d510765827a1e1297\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:07.258506 kubelet[2396]: E0213 15:44:07.258469    2396 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b21fa514b2a2e042658f887a5d7d282c7a1ac15f6face81d510765827a1e1297\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:07.258594 kubelet[2396]: E0213 15:44:07.258533    2396 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b21fa514b2a2e042658f887a5d7d282c7a1ac15f6face81d510765827a1e1297\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wpjns"
Feb 13 15:44:07.258594 kubelet[2396]: E0213 15:44:07.258563    2396 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b21fa514b2a2e042658f887a5d7d282c7a1ac15f6face81d510765827a1e1297\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wpjns"
Feb 13 15:44:07.258849 kubelet[2396]: E0213 15:44:07.258632    2396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wpjns_calico-system(0c126d30-b7f3-49c7-adb0-7d602f0e81f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wpjns_calico-system(0c126d30-b7f3-49c7-adb0-7d602f0e81f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b21fa514b2a2e042658f887a5d7d282c7a1ac15f6face81d510765827a1e1297\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wpjns" podUID="0c126d30-b7f3-49c7-adb0-7d602f0e81f4"
Feb 13 15:44:07.698069 kubelet[2396]: E0213 15:44:07.697677    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:08.008632 kubelet[2396]: I0213 15:44:08.008428    2396 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b21fa514b2a2e042658f887a5d7d282c7a1ac15f6face81d510765827a1e1297"
Feb 13 15:44:08.010505 containerd[1896]: time="2025-02-13T15:44:08.010341472Z" level=info msg="StopPodSandbox for \"b21fa514b2a2e042658f887a5d7d282c7a1ac15f6face81d510765827a1e1297\""
Feb 13 15:44:08.011449 containerd[1896]: time="2025-02-13T15:44:08.010653304Z" level=info msg="Ensure that sandbox b21fa514b2a2e042658f887a5d7d282c7a1ac15f6face81d510765827a1e1297 in task-service has been cleanup successfully"
Feb 13 15:44:08.013530 systemd[1]: run-netns-cni\x2de41d5a4e\x2ddd5e\x2d08ea\x2d0b10\x2d0f80a00d1bdc.mount: Deactivated successfully.
Feb 13 15:44:08.014087 containerd[1896]: time="2025-02-13T15:44:08.013746185Z" level=info msg="TearDown network for sandbox \"b21fa514b2a2e042658f887a5d7d282c7a1ac15f6face81d510765827a1e1297\" successfully"
Feb 13 15:44:08.014087 containerd[1896]: time="2025-02-13T15:44:08.013777043Z" level=info msg="StopPodSandbox for \"b21fa514b2a2e042658f887a5d7d282c7a1ac15f6face81d510765827a1e1297\" returns successfully"
Feb 13 15:44:08.016214 containerd[1896]: time="2025-02-13T15:44:08.015261638Z" level=info msg="StopPodSandbox for \"35005c0d515a0eaddf1296b48287bf2fc564795686b923d26c7167822c461e34\""
Feb 13 15:44:08.016214 containerd[1896]: time="2025-02-13T15:44:08.015371736Z" level=info msg="TearDown network for sandbox \"35005c0d515a0eaddf1296b48287bf2fc564795686b923d26c7167822c461e34\" successfully"
Feb 13 15:44:08.016214 containerd[1896]: time="2025-02-13T15:44:08.015387654Z" level=info msg="StopPodSandbox for \"35005c0d515a0eaddf1296b48287bf2fc564795686b923d26c7167822c461e34\" returns successfully"
Feb 13 15:44:08.016214 containerd[1896]: time="2025-02-13T15:44:08.016190529Z" level=info msg="StopPodSandbox for \"6eaf4246a3d5e7e3165b3333314936cdddcf2d034115976cf86b314f099da07b\""
Feb 13 15:44:08.016456 containerd[1896]: time="2025-02-13T15:44:08.016284123Z" level=info msg="TearDown network for sandbox \"6eaf4246a3d5e7e3165b3333314936cdddcf2d034115976cf86b314f099da07b\" successfully"
Feb 13 15:44:08.016456 containerd[1896]: time="2025-02-13T15:44:08.016301315Z" level=info msg="StopPodSandbox for \"6eaf4246a3d5e7e3165b3333314936cdddcf2d034115976cf86b314f099da07b\" returns successfully"
Feb 13 15:44:08.017498 containerd[1896]: time="2025-02-13T15:44:08.017470988Z" level=info msg="StopPodSandbox for \"98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556\""
Feb 13 15:44:08.017738 containerd[1896]: time="2025-02-13T15:44:08.017718832Z" level=info msg="TearDown network for sandbox \"98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556\" successfully"
Feb 13 15:44:08.017822 containerd[1896]: time="2025-02-13T15:44:08.017738476Z" level=info msg="StopPodSandbox for \"98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556\" returns successfully"
Feb 13 15:44:08.019896 containerd[1896]: time="2025-02-13T15:44:08.019843627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wpjns,Uid:0c126d30-b7f3-49c7-adb0-7d602f0e81f4,Namespace:calico-system,Attempt:4,}"
Feb 13 15:44:08.232811 containerd[1896]: time="2025-02-13T15:44:08.232494288Z" level=error msg="Failed to destroy network for sandbox \"4227299146889fc5eacd5b01a7ebae841b87836f7462cf3119f2502d492cf188\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:08.234405 containerd[1896]: time="2025-02-13T15:44:08.234356060Z" level=error msg="encountered an error cleaning up failed sandbox \"4227299146889fc5eacd5b01a7ebae841b87836f7462cf3119f2502d492cf188\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:08.234557 containerd[1896]: time="2025-02-13T15:44:08.234450024Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wpjns,Uid:0c126d30-b7f3-49c7-adb0-7d602f0e81f4,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"4227299146889fc5eacd5b01a7ebae841b87836f7462cf3119f2502d492cf188\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:08.236546 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4227299146889fc5eacd5b01a7ebae841b87836f7462cf3119f2502d492cf188-shm.mount: Deactivated successfully.
Feb 13 15:44:08.237093 kubelet[2396]: E0213 15:44:08.236938    2396 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4227299146889fc5eacd5b01a7ebae841b87836f7462cf3119f2502d492cf188\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:08.237093 kubelet[2396]: E0213 15:44:08.237011    2396 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4227299146889fc5eacd5b01a7ebae841b87836f7462cf3119f2502d492cf188\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wpjns"
Feb 13 15:44:08.237093 kubelet[2396]: E0213 15:44:08.237042    2396 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4227299146889fc5eacd5b01a7ebae841b87836f7462cf3119f2502d492cf188\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wpjns"
Feb 13 15:44:08.238920 kubelet[2396]: E0213 15:44:08.237862    2396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wpjns_calico-system(0c126d30-b7f3-49c7-adb0-7d602f0e81f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wpjns_calico-system(0c126d30-b7f3-49c7-adb0-7d602f0e81f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4227299146889fc5eacd5b01a7ebae841b87836f7462cf3119f2502d492cf188\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wpjns" podUID="0c126d30-b7f3-49c7-adb0-7d602f0e81f4"
Feb 13 15:44:08.335208 systemd[1]: Created slice kubepods-besteffort-podf00d591e_7798_4af4_9c36_f83590ed4ecd.slice - libcontainer container kubepods-besteffort-podf00d591e_7798_4af4_9c36_f83590ed4ecd.slice.
Feb 13 15:44:08.366751 kubelet[2396]: I0213 15:44:08.365980    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhd72\" (UniqueName: \"kubernetes.io/projected/f00d591e-7798-4af4-9c36-f83590ed4ecd-kube-api-access-fhd72\") pod \"nginx-deployment-8587fbcb89-9dw5w\" (UID: \"f00d591e-7798-4af4-9c36-f83590ed4ecd\") " pod="default/nginx-deployment-8587fbcb89-9dw5w"
Feb 13 15:44:08.642962 containerd[1896]: time="2025-02-13T15:44:08.642842128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-9dw5w,Uid:f00d591e-7798-4af4-9c36-f83590ed4ecd,Namespace:default,Attempt:0,}"
Feb 13 15:44:08.667073 kubelet[2396]: E0213 15:44:08.665744    2396 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:08.697847 kubelet[2396]: E0213 15:44:08.697804    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:08.817386 containerd[1896]: time="2025-02-13T15:44:08.817344177Z" level=error msg="Failed to destroy network for sandbox \"db51172a800c1adb687d4121d2ec42d9aa54a58a68013c4c0280ad7511b6bafa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:08.817782 containerd[1896]: time="2025-02-13T15:44:08.817759695Z" level=error msg="encountered an error cleaning up failed sandbox \"db51172a800c1adb687d4121d2ec42d9aa54a58a68013c4c0280ad7511b6bafa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:08.818538 containerd[1896]: time="2025-02-13T15:44:08.818512183Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-9dw5w,Uid:f00d591e-7798-4af4-9c36-f83590ed4ecd,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"db51172a800c1adb687d4121d2ec42d9aa54a58a68013c4c0280ad7511b6bafa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:08.819193 kubelet[2396]: E0213 15:44:08.818815    2396 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db51172a800c1adb687d4121d2ec42d9aa54a58a68013c4c0280ad7511b6bafa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:08.819193 kubelet[2396]: E0213 15:44:08.818880    2396 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db51172a800c1adb687d4121d2ec42d9aa54a58a68013c4c0280ad7511b6bafa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-9dw5w"
Feb 13 15:44:08.819193 kubelet[2396]: E0213 15:44:08.818911    2396 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db51172a800c1adb687d4121d2ec42d9aa54a58a68013c4c0280ad7511b6bafa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-9dw5w"
Feb 13 15:44:08.819383 kubelet[2396]: E0213 15:44:08.818961    2396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-9dw5w_default(f00d591e-7798-4af4-9c36-f83590ed4ecd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-9dw5w_default(f00d591e-7798-4af4-9c36-f83590ed4ecd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db51172a800c1adb687d4121d2ec42d9aa54a58a68013c4c0280ad7511b6bafa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-9dw5w" podUID="f00d591e-7798-4af4-9c36-f83590ed4ecd"
Feb 13 15:44:08.974547 kubelet[2396]: I0213 15:44:08.970912    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e0f5901-a2ec-434f-bd84-4e9bf59ff236-tigera-ca-bundle\") pod \"calico-kube-controllers-789ffc5c9d-vhwwt\" (UID: \"8e0f5901-a2ec-434f-bd84-4e9bf59ff236\") " pod="calico-system/calico-kube-controllers-789ffc5c9d-vhwwt"
Feb 13 15:44:08.974547 kubelet[2396]: I0213 15:44:08.971090    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw2gk\" (UniqueName: \"kubernetes.io/projected/8e0f5901-a2ec-434f-bd84-4e9bf59ff236-kube-api-access-dw2gk\") pod \"calico-kube-controllers-789ffc5c9d-vhwwt\" (UID: \"8e0f5901-a2ec-434f-bd84-4e9bf59ff236\") " pod="calico-system/calico-kube-controllers-789ffc5c9d-vhwwt"
Feb 13 15:44:08.991574 systemd[1]: Created slice kubepods-besteffort-pod8e0f5901_a2ec_434f_bd84_4e9bf59ff236.slice - libcontainer container kubepods-besteffort-pod8e0f5901_a2ec_434f_bd84_4e9bf59ff236.slice.
Feb 13 15:44:09.015836 kubelet[2396]: I0213 15:44:09.014100    2396 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db51172a800c1adb687d4121d2ec42d9aa54a58a68013c4c0280ad7511b6bafa"
Feb 13 15:44:09.015082 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-db51172a800c1adb687d4121d2ec42d9aa54a58a68013c4c0280ad7511b6bafa-shm.mount: Deactivated successfully.
Feb 13 15:44:09.020500 containerd[1896]: time="2025-02-13T15:44:09.019885330Z" level=info msg="StopPodSandbox for \"db51172a800c1adb687d4121d2ec42d9aa54a58a68013c4c0280ad7511b6bafa\""
Feb 13 15:44:09.020500 containerd[1896]: time="2025-02-13T15:44:09.020156584Z" level=info msg="Ensure that sandbox db51172a800c1adb687d4121d2ec42d9aa54a58a68013c4c0280ad7511b6bafa in task-service has been cleanup successfully"
Feb 13 15:44:09.026826 containerd[1896]: time="2025-02-13T15:44:09.025423388Z" level=info msg="TearDown network for sandbox \"db51172a800c1adb687d4121d2ec42d9aa54a58a68013c4c0280ad7511b6bafa\" successfully"
Feb 13 15:44:09.026826 containerd[1896]: time="2025-02-13T15:44:09.025472954Z" level=info msg="StopPodSandbox for \"db51172a800c1adb687d4121d2ec42d9aa54a58a68013c4c0280ad7511b6bafa\" returns successfully"
Feb 13 15:44:09.026485 systemd[1]: run-netns-cni\x2d65acc9e7\x2de970\x2dc9b2\x2dbcac\x2d6ec0360c5ffc.mount: Deactivated successfully.
Feb 13 15:44:09.029892 containerd[1896]: time="2025-02-13T15:44:09.029568035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-9dw5w,Uid:f00d591e-7798-4af4-9c36-f83590ed4ecd,Namespace:default,Attempt:1,}"
Feb 13 15:44:09.031386 kubelet[2396]: I0213 15:44:09.031356    2396 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4227299146889fc5eacd5b01a7ebae841b87836f7462cf3119f2502d492cf188"
Feb 13 15:44:09.032338 containerd[1896]: time="2025-02-13T15:44:09.032182152Z" level=info msg="StopPodSandbox for \"4227299146889fc5eacd5b01a7ebae841b87836f7462cf3119f2502d492cf188\""
Feb 13 15:44:09.033371 containerd[1896]: time="2025-02-13T15:44:09.033240989Z" level=info msg="Ensure that sandbox 4227299146889fc5eacd5b01a7ebae841b87836f7462cf3119f2502d492cf188 in task-service has been cleanup successfully"
Feb 13 15:44:09.034611 containerd[1896]: time="2025-02-13T15:44:09.034510921Z" level=info msg="TearDown network for sandbox \"4227299146889fc5eacd5b01a7ebae841b87836f7462cf3119f2502d492cf188\" successfully"
Feb 13 15:44:09.034611 containerd[1896]: time="2025-02-13T15:44:09.034535134Z" level=info msg="StopPodSandbox for \"4227299146889fc5eacd5b01a7ebae841b87836f7462cf3119f2502d492cf188\" returns successfully"
Feb 13 15:44:09.039444 containerd[1896]: time="2025-02-13T15:44:09.039204999Z" level=info msg="StopPodSandbox for \"b21fa514b2a2e042658f887a5d7d282c7a1ac15f6face81d510765827a1e1297\""
Feb 13 15:44:09.039444 containerd[1896]: time="2025-02-13T15:44:09.039327397Z" level=info msg="TearDown network for sandbox \"b21fa514b2a2e042658f887a5d7d282c7a1ac15f6face81d510765827a1e1297\" successfully"
Feb 13 15:44:09.039444 containerd[1896]: time="2025-02-13T15:44:09.039343469Z" level=info msg="StopPodSandbox for \"b21fa514b2a2e042658f887a5d7d282c7a1ac15f6face81d510765827a1e1297\" returns successfully"
Feb 13 15:44:09.039581 systemd[1]: run-netns-cni\x2d8ae67b67\x2d2090\x2d08d0\x2d06ce\x2d19b579a17a30.mount: Deactivated successfully.
Feb 13 15:44:09.040475 containerd[1896]: time="2025-02-13T15:44:09.040129964Z" level=info msg="StopPodSandbox for \"35005c0d515a0eaddf1296b48287bf2fc564795686b923d26c7167822c461e34\""
Feb 13 15:44:09.040475 containerd[1896]: time="2025-02-13T15:44:09.040228135Z" level=info msg="TearDown network for sandbox \"35005c0d515a0eaddf1296b48287bf2fc564795686b923d26c7167822c461e34\" successfully"
Feb 13 15:44:09.040475 containerd[1896]: time="2025-02-13T15:44:09.040243310Z" level=info msg="StopPodSandbox for \"35005c0d515a0eaddf1296b48287bf2fc564795686b923d26c7167822c461e34\" returns successfully"
Feb 13 15:44:09.041200 containerd[1896]: time="2025-02-13T15:44:09.040703558Z" level=info msg="StopPodSandbox for \"6eaf4246a3d5e7e3165b3333314936cdddcf2d034115976cf86b314f099da07b\""
Feb 13 15:44:09.041668 containerd[1896]: time="2025-02-13T15:44:09.040788761Z" level=info msg="TearDown network for sandbox \"6eaf4246a3d5e7e3165b3333314936cdddcf2d034115976cf86b314f099da07b\" successfully"
Feb 13 15:44:09.041668 containerd[1896]: time="2025-02-13T15:44:09.041619593Z" level=info msg="StopPodSandbox for \"6eaf4246a3d5e7e3165b3333314936cdddcf2d034115976cf86b314f099da07b\" returns successfully"
Feb 13 15:44:09.043736 containerd[1896]: time="2025-02-13T15:44:09.043326297Z" level=info msg="StopPodSandbox for \"98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556\""
Feb 13 15:44:09.043736 containerd[1896]: time="2025-02-13T15:44:09.043441405Z" level=info msg="TearDown network for sandbox \"98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556\" successfully"
Feb 13 15:44:09.043736 containerd[1896]: time="2025-02-13T15:44:09.043457866Z" level=info msg="StopPodSandbox for \"98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556\" returns successfully"
Feb 13 15:44:09.046199 containerd[1896]: time="2025-02-13T15:44:09.044910763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wpjns,Uid:0c126d30-b7f3-49c7-adb0-7d602f0e81f4,Namespace:calico-system,Attempt:5,}"
Feb 13 15:44:09.298475 containerd[1896]: time="2025-02-13T15:44:09.298360470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-789ffc5c9d-vhwwt,Uid:8e0f5901-a2ec-434f-bd84-4e9bf59ff236,Namespace:calico-system,Attempt:0,}"
Feb 13 15:44:09.359216 containerd[1896]: time="2025-02-13T15:44:09.359082016Z" level=error msg="Failed to destroy network for sandbox \"9319c94ca13030aa9d75515f11a6a27129f26b3a540c8fc9b3ff08bb8b690689\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:09.364363 containerd[1896]: time="2025-02-13T15:44:09.364156557Z" level=error msg="encountered an error cleaning up failed sandbox \"9319c94ca13030aa9d75515f11a6a27129f26b3a540c8fc9b3ff08bb8b690689\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:09.364363 containerd[1896]: time="2025-02-13T15:44:09.364254113Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wpjns,Uid:0c126d30-b7f3-49c7-adb0-7d602f0e81f4,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"9319c94ca13030aa9d75515f11a6a27129f26b3a540c8fc9b3ff08bb8b690689\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:09.365301 kubelet[2396]: E0213 15:44:09.364736    2396 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9319c94ca13030aa9d75515f11a6a27129f26b3a540c8fc9b3ff08bb8b690689\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:09.365301 kubelet[2396]: E0213 15:44:09.364811    2396 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9319c94ca13030aa9d75515f11a6a27129f26b3a540c8fc9b3ff08bb8b690689\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wpjns"
Feb 13 15:44:09.365301 kubelet[2396]: E0213 15:44:09.364841    2396 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9319c94ca13030aa9d75515f11a6a27129f26b3a540c8fc9b3ff08bb8b690689\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wpjns"
Feb 13 15:44:09.365876 kubelet[2396]: E0213 15:44:09.364901    2396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wpjns_calico-system(0c126d30-b7f3-49c7-adb0-7d602f0e81f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wpjns_calico-system(0c126d30-b7f3-49c7-adb0-7d602f0e81f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9319c94ca13030aa9d75515f11a6a27129f26b3a540c8fc9b3ff08bb8b690689\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wpjns" podUID="0c126d30-b7f3-49c7-adb0-7d602f0e81f4"
Feb 13 15:44:09.379529 containerd[1896]: time="2025-02-13T15:44:09.378949260Z" level=error msg="Failed to destroy network for sandbox \"55da48b26bc30cfa7260af1475a550e283fb097ac1f85f25fe65770b53672329\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:09.381055 containerd[1896]: time="2025-02-13T15:44:09.380321757Z" level=error msg="encountered an error cleaning up failed sandbox \"55da48b26bc30cfa7260af1475a550e283fb097ac1f85f25fe65770b53672329\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:09.381055 containerd[1896]: time="2025-02-13T15:44:09.380406400Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-9dw5w,Uid:f00d591e-7798-4af4-9c36-f83590ed4ecd,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"55da48b26bc30cfa7260af1475a550e283fb097ac1f85f25fe65770b53672329\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:09.381598 kubelet[2396]: E0213 15:44:09.380640    2396 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55da48b26bc30cfa7260af1475a550e283fb097ac1f85f25fe65770b53672329\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:09.381598 kubelet[2396]: E0213 15:44:09.380709    2396 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55da48b26bc30cfa7260af1475a550e283fb097ac1f85f25fe65770b53672329\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-9dw5w"
Feb 13 15:44:09.381598 kubelet[2396]: E0213 15:44:09.380735    2396 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55da48b26bc30cfa7260af1475a550e283fb097ac1f85f25fe65770b53672329\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-9dw5w"
Feb 13 15:44:09.381755 kubelet[2396]: E0213 15:44:09.380783    2396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-9dw5w_default(f00d591e-7798-4af4-9c36-f83590ed4ecd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-9dw5w_default(f00d591e-7798-4af4-9c36-f83590ed4ecd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"55da48b26bc30cfa7260af1475a550e283fb097ac1f85f25fe65770b53672329\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-9dw5w" podUID="f00d591e-7798-4af4-9c36-f83590ed4ecd"
Feb 13 15:44:09.511980 containerd[1896]: time="2025-02-13T15:44:09.511228488Z" level=error msg="Failed to destroy network for sandbox \"1df1052dfcfa46dcea05d88e3b8d50dfac82d4fb973a472dbd4de76fe2b1d8db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:09.511980 containerd[1896]: time="2025-02-13T15:44:09.511588752Z" level=error msg="encountered an error cleaning up failed sandbox \"1df1052dfcfa46dcea05d88e3b8d50dfac82d4fb973a472dbd4de76fe2b1d8db\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:09.511980 containerd[1896]: time="2025-02-13T15:44:09.511666900Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-789ffc5c9d-vhwwt,Uid:8e0f5901-a2ec-434f-bd84-4e9bf59ff236,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1df1052dfcfa46dcea05d88e3b8d50dfac82d4fb973a472dbd4de76fe2b1d8db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:09.512831 kubelet[2396]: E0213 15:44:09.512394    2396 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1df1052dfcfa46dcea05d88e3b8d50dfac82d4fb973a472dbd4de76fe2b1d8db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:09.512831 kubelet[2396]: E0213 15:44:09.512465    2396 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1df1052dfcfa46dcea05d88e3b8d50dfac82d4fb973a472dbd4de76fe2b1d8db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-789ffc5c9d-vhwwt"
Feb 13 15:44:09.512831 kubelet[2396]: E0213 15:44:09.512492    2396 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1df1052dfcfa46dcea05d88e3b8d50dfac82d4fb973a472dbd4de76fe2b1d8db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-789ffc5c9d-vhwwt"
Feb 13 15:44:09.513035 kubelet[2396]: E0213 15:44:09.512552    2396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-789ffc5c9d-vhwwt_calico-system(8e0f5901-a2ec-434f-bd84-4e9bf59ff236)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-789ffc5c9d-vhwwt_calico-system(8e0f5901-a2ec-434f-bd84-4e9bf59ff236)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1df1052dfcfa46dcea05d88e3b8d50dfac82d4fb973a472dbd4de76fe2b1d8db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-789ffc5c9d-vhwwt" podUID="8e0f5901-a2ec-434f-bd84-4e9bf59ff236"
Feb 13 15:44:09.699083 kubelet[2396]: E0213 15:44:09.698842    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:10.019508 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-55da48b26bc30cfa7260af1475a550e283fb097ac1f85f25fe65770b53672329-shm.mount: Deactivated successfully.
Feb 13 15:44:10.041995 kubelet[2396]: I0213 15:44:10.041523    2396 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9319c94ca13030aa9d75515f11a6a27129f26b3a540c8fc9b3ff08bb8b690689"
Feb 13 15:44:10.043753 containerd[1896]: time="2025-02-13T15:44:10.043487715Z" level=info msg="StopPodSandbox for \"9319c94ca13030aa9d75515f11a6a27129f26b3a540c8fc9b3ff08bb8b690689\""
Feb 13 15:44:10.043753 containerd[1896]: time="2025-02-13T15:44:10.043735857Z" level=info msg="Ensure that sandbox 9319c94ca13030aa9d75515f11a6a27129f26b3a540c8fc9b3ff08bb8b690689 in task-service has been cleanup successfully"
Feb 13 15:44:10.045198 kubelet[2396]: I0213 15:44:10.045005    2396 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55da48b26bc30cfa7260af1475a550e283fb097ac1f85f25fe65770b53672329"
Feb 13 15:44:10.047826 containerd[1896]: time="2025-02-13T15:44:10.046873442Z" level=info msg="TearDown network for sandbox \"9319c94ca13030aa9d75515f11a6a27129f26b3a540c8fc9b3ff08bb8b690689\" successfully"
Feb 13 15:44:10.047826 containerd[1896]: time="2025-02-13T15:44:10.046906169Z" level=info msg="StopPodSandbox for \"9319c94ca13030aa9d75515f11a6a27129f26b3a540c8fc9b3ff08bb8b690689\" returns successfully"
Feb 13 15:44:10.047826 containerd[1896]: time="2025-02-13T15:44:10.047212093Z" level=info msg="StopPodSandbox for \"4227299146889fc5eacd5b01a7ebae841b87836f7462cf3119f2502d492cf188\""
Feb 13 15:44:10.047826 containerd[1896]: time="2025-02-13T15:44:10.047310425Z" level=info msg="TearDown network for sandbox \"4227299146889fc5eacd5b01a7ebae841b87836f7462cf3119f2502d492cf188\" successfully"
Feb 13 15:44:10.047826 containerd[1896]: time="2025-02-13T15:44:10.047325054Z" level=info msg="StopPodSandbox for \"4227299146889fc5eacd5b01a7ebae841b87836f7462cf3119f2502d492cf188\" returns successfully"
Feb 13 15:44:10.050109 systemd[1]: run-netns-cni\x2da1eddeda\x2dd39e\x2d8a2f\x2d1fdb\x2d88c81a001673.mount: Deactivated successfully.
Feb 13 15:44:10.051889 containerd[1896]: time="2025-02-13T15:44:10.051225731Z" level=info msg="StopPodSandbox for \"55da48b26bc30cfa7260af1475a550e283fb097ac1f85f25fe65770b53672329\""
Feb 13 15:44:10.051889 containerd[1896]: time="2025-02-13T15:44:10.051495355Z" level=info msg="Ensure that sandbox 55da48b26bc30cfa7260af1475a550e283fb097ac1f85f25fe65770b53672329 in task-service has been cleanup successfully"
Feb 13 15:44:10.055100 containerd[1896]: time="2025-02-13T15:44:10.052591417Z" level=info msg="TearDown network for sandbox \"55da48b26bc30cfa7260af1475a550e283fb097ac1f85f25fe65770b53672329\" successfully"
Feb 13 15:44:10.055100 containerd[1896]: time="2025-02-13T15:44:10.052620526Z" level=info msg="StopPodSandbox for \"55da48b26bc30cfa7260af1475a550e283fb097ac1f85f25fe65770b53672329\" returns successfully"
Feb 13 15:44:10.056493 containerd[1896]: time="2025-02-13T15:44:10.055279537Z" level=info msg="StopPodSandbox for \"b21fa514b2a2e042658f887a5d7d282c7a1ac15f6face81d510765827a1e1297\""
Feb 13 15:44:10.056493 containerd[1896]: time="2025-02-13T15:44:10.055378263Z" level=info msg="TearDown network for sandbox \"b21fa514b2a2e042658f887a5d7d282c7a1ac15f6face81d510765827a1e1297\" successfully"
Feb 13 15:44:10.056493 containerd[1896]: time="2025-02-13T15:44:10.055392905Z" level=info msg="StopPodSandbox for \"b21fa514b2a2e042658f887a5d7d282c7a1ac15f6face81d510765827a1e1297\" returns successfully"
Feb 13 15:44:10.056679 kubelet[2396]: I0213 15:44:10.055400    2396 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1df1052dfcfa46dcea05d88e3b8d50dfac82d4fb973a472dbd4de76fe2b1d8db"
Feb 13 15:44:10.056732 containerd[1896]: time="2025-02-13T15:44:10.056616234Z" level=info msg="StopPodSandbox for \"1df1052dfcfa46dcea05d88e3b8d50dfac82d4fb973a472dbd4de76fe2b1d8db\""
Feb 13 15:44:10.059611 containerd[1896]: time="2025-02-13T15:44:10.056881844Z" level=info msg="Ensure that sandbox 1df1052dfcfa46dcea05d88e3b8d50dfac82d4fb973a472dbd4de76fe2b1d8db in task-service has been cleanup successfully"
Feb 13 15:44:10.059611 containerd[1896]: time="2025-02-13T15:44:10.057056926Z" level=info msg="TearDown network for sandbox \"1df1052dfcfa46dcea05d88e3b8d50dfac82d4fb973a472dbd4de76fe2b1d8db\" successfully"
Feb 13 15:44:10.059611 containerd[1896]: time="2025-02-13T15:44:10.057076466Z" level=info msg="StopPodSandbox for \"1df1052dfcfa46dcea05d88e3b8d50dfac82d4fb973a472dbd4de76fe2b1d8db\" returns successfully"
Feb 13 15:44:10.059611 containerd[1896]: time="2025-02-13T15:44:10.057163208Z" level=info msg="StopPodSandbox for \"db51172a800c1adb687d4121d2ec42d9aa54a58a68013c4c0280ad7511b6bafa\""
Feb 13 15:44:10.059611 containerd[1896]: time="2025-02-13T15:44:10.057238533Z" level=info msg="TearDown network for sandbox \"db51172a800c1adb687d4121d2ec42d9aa54a58a68013c4c0280ad7511b6bafa\" successfully"
Feb 13 15:44:10.059611 containerd[1896]: time="2025-02-13T15:44:10.057250926Z" level=info msg="StopPodSandbox for \"db51172a800c1adb687d4121d2ec42d9aa54a58a68013c4c0280ad7511b6bafa\" returns successfully"
Feb 13 15:44:10.059611 containerd[1896]: time="2025-02-13T15:44:10.058996596Z" level=info msg="StopPodSandbox for \"35005c0d515a0eaddf1296b48287bf2fc564795686b923d26c7167822c461e34\""
Feb 13 15:44:10.059611 containerd[1896]: time="2025-02-13T15:44:10.059090318Z" level=info msg="TearDown network for sandbox \"35005c0d515a0eaddf1296b48287bf2fc564795686b923d26c7167822c461e34\" successfully"
Feb 13 15:44:10.059611 containerd[1896]: time="2025-02-13T15:44:10.059104194Z" level=info msg="StopPodSandbox for \"35005c0d515a0eaddf1296b48287bf2fc564795686b923d26c7167822c461e34\" returns successfully"
Feb 13 15:44:10.057421 systemd[1]: run-netns-cni\x2dd2f2240c\x2d5fdb\x2d4c73\x2dc31a\x2d33bd9ef2eda2.mount: Deactivated successfully.
Feb 13 15:44:10.064894 systemd[1]: run-netns-cni\x2da220a39b\x2d3393\x2dfb6d\x2d46e8\x2d39efc4200e1d.mount: Deactivated successfully.
Feb 13 15:44:10.066976 containerd[1896]: time="2025-02-13T15:44:10.065922279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-789ffc5c9d-vhwwt,Uid:8e0f5901-a2ec-434f-bd84-4e9bf59ff236,Namespace:calico-system,Attempt:1,}"
Feb 13 15:44:10.068642 containerd[1896]: time="2025-02-13T15:44:10.068513075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-9dw5w,Uid:f00d591e-7798-4af4-9c36-f83590ed4ecd,Namespace:default,Attempt:2,}"
Feb 13 15:44:10.069934 containerd[1896]: time="2025-02-13T15:44:10.069611053Z" level=info msg="StopPodSandbox for \"6eaf4246a3d5e7e3165b3333314936cdddcf2d034115976cf86b314f099da07b\""
Feb 13 15:44:10.072541 containerd[1896]: time="2025-02-13T15:44:10.072208223Z" level=info msg="TearDown network for sandbox \"6eaf4246a3d5e7e3165b3333314936cdddcf2d034115976cf86b314f099da07b\" successfully"
Feb 13 15:44:10.072541 containerd[1896]: time="2025-02-13T15:44:10.072361880Z" level=info msg="StopPodSandbox for \"6eaf4246a3d5e7e3165b3333314936cdddcf2d034115976cf86b314f099da07b\" returns successfully"
Feb 13 15:44:10.073647 containerd[1896]: time="2025-02-13T15:44:10.073619678Z" level=info msg="StopPodSandbox for \"98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556\""
Feb 13 15:44:10.074188 containerd[1896]: time="2025-02-13T15:44:10.074138492Z" level=info msg="TearDown network for sandbox \"98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556\" successfully"
Feb 13 15:44:10.074188 containerd[1896]: time="2025-02-13T15:44:10.074159035Z" level=info msg="StopPodSandbox for \"98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556\" returns successfully"
Feb 13 15:44:10.075567 containerd[1896]: time="2025-02-13T15:44:10.075539508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wpjns,Uid:0c126d30-b7f3-49c7-adb0-7d602f0e81f4,Namespace:calico-system,Attempt:6,}"
Feb 13 15:44:10.325000 containerd[1896]: time="2025-02-13T15:44:10.324241703Z" level=error msg="Failed to destroy network for sandbox \"88a5cf61a89c8d07a0e6583e15cd66236cfc0865d38d76aa2911169949bf9611\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:10.333660 containerd[1896]: time="2025-02-13T15:44:10.333477012Z" level=error msg="encountered an error cleaning up failed sandbox \"88a5cf61a89c8d07a0e6583e15cd66236cfc0865d38d76aa2911169949bf9611\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:10.334207 containerd[1896]: time="2025-02-13T15:44:10.333769580Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-789ffc5c9d-vhwwt,Uid:8e0f5901-a2ec-434f-bd84-4e9bf59ff236,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"88a5cf61a89c8d07a0e6583e15cd66236cfc0865d38d76aa2911169949bf9611\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:10.335319 kubelet[2396]: E0213 15:44:10.334577    2396 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88a5cf61a89c8d07a0e6583e15cd66236cfc0865d38d76aa2911169949bf9611\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:10.335319 kubelet[2396]: E0213 15:44:10.334647    2396 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88a5cf61a89c8d07a0e6583e15cd66236cfc0865d38d76aa2911169949bf9611\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-789ffc5c9d-vhwwt"
Feb 13 15:44:10.335319 kubelet[2396]: E0213 15:44:10.334674    2396 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88a5cf61a89c8d07a0e6583e15cd66236cfc0865d38d76aa2911169949bf9611\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-789ffc5c9d-vhwwt"
Feb 13 15:44:10.335493 kubelet[2396]: E0213 15:44:10.334725    2396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-789ffc5c9d-vhwwt_calico-system(8e0f5901-a2ec-434f-bd84-4e9bf59ff236)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-789ffc5c9d-vhwwt_calico-system(8e0f5901-a2ec-434f-bd84-4e9bf59ff236)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"88a5cf61a89c8d07a0e6583e15cd66236cfc0865d38d76aa2911169949bf9611\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-789ffc5c9d-vhwwt" podUID="8e0f5901-a2ec-434f-bd84-4e9bf59ff236"
Feb 13 15:44:10.391365 containerd[1896]: time="2025-02-13T15:44:10.390531425Z" level=error msg="Failed to destroy network for sandbox \"47db03c8528aa2564696e560f597e1d6891193e02ca9fb59d80b465ed11f0300\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:10.393639 containerd[1896]: time="2025-02-13T15:44:10.391636516Z" level=error msg="encountered an error cleaning up failed sandbox \"47db03c8528aa2564696e560f597e1d6891193e02ca9fb59d80b465ed11f0300\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:10.394252 containerd[1896]: time="2025-02-13T15:44:10.393951991Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wpjns,Uid:0c126d30-b7f3-49c7-adb0-7d602f0e81f4,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"47db03c8528aa2564696e560f597e1d6891193e02ca9fb59d80b465ed11f0300\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:10.395090 kubelet[2396]: E0213 15:44:10.394390    2396 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47db03c8528aa2564696e560f597e1d6891193e02ca9fb59d80b465ed11f0300\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:10.395090 kubelet[2396]: E0213 15:44:10.394456    2396 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47db03c8528aa2564696e560f597e1d6891193e02ca9fb59d80b465ed11f0300\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wpjns"
Feb 13 15:44:10.395090 kubelet[2396]: E0213 15:44:10.394488    2396 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47db03c8528aa2564696e560f597e1d6891193e02ca9fb59d80b465ed11f0300\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wpjns"
Feb 13 15:44:10.395415 kubelet[2396]: E0213 15:44:10.394622    2396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wpjns_calico-system(0c126d30-b7f3-49c7-adb0-7d602f0e81f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wpjns_calico-system(0c126d30-b7f3-49c7-adb0-7d602f0e81f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"47db03c8528aa2564696e560f597e1d6891193e02ca9fb59d80b465ed11f0300\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wpjns" podUID="0c126d30-b7f3-49c7-adb0-7d602f0e81f4"
Feb 13 15:44:10.430464 containerd[1896]: time="2025-02-13T15:44:10.430091952Z" level=error msg="Failed to destroy network for sandbox \"656cb9cd83dfd3cffb8e8f43761bf28694ef337aa250f9c277f46916222a15cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:10.431110 containerd[1896]: time="2025-02-13T15:44:10.431070009Z" level=error msg="encountered an error cleaning up failed sandbox \"656cb9cd83dfd3cffb8e8f43761bf28694ef337aa250f9c277f46916222a15cc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:10.431242 containerd[1896]: time="2025-02-13T15:44:10.431153725Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-9dw5w,Uid:f00d591e-7798-4af4-9c36-f83590ed4ecd,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"656cb9cd83dfd3cffb8e8f43761bf28694ef337aa250f9c277f46916222a15cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:10.431839 kubelet[2396]: E0213 15:44:10.431600    2396 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"656cb9cd83dfd3cffb8e8f43761bf28694ef337aa250f9c277f46916222a15cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:10.431839 kubelet[2396]: E0213 15:44:10.431668    2396 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"656cb9cd83dfd3cffb8e8f43761bf28694ef337aa250f9c277f46916222a15cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-9dw5w"
Feb 13 15:44:10.431839 kubelet[2396]: E0213 15:44:10.431700    2396 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"656cb9cd83dfd3cffb8e8f43761bf28694ef337aa250f9c277f46916222a15cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-9dw5w"
Feb 13 15:44:10.432220 kubelet[2396]: E0213 15:44:10.431759    2396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-9dw5w_default(f00d591e-7798-4af4-9c36-f83590ed4ecd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-9dw5w_default(f00d591e-7798-4af4-9c36-f83590ed4ecd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"656cb9cd83dfd3cffb8e8f43761bf28694ef337aa250f9c277f46916222a15cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-9dw5w" podUID="f00d591e-7798-4af4-9c36-f83590ed4ecd"
Feb 13 15:44:10.700896 kubelet[2396]: E0213 15:44:10.699589    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:11.017167 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-656cb9cd83dfd3cffb8e8f43761bf28694ef337aa250f9c277f46916222a15cc-shm.mount: Deactivated successfully.
Feb 13 15:44:11.061991 kubelet[2396]: I0213 15:44:11.061020    2396 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="656cb9cd83dfd3cffb8e8f43761bf28694ef337aa250f9c277f46916222a15cc"
Feb 13 15:44:11.062485 containerd[1896]: time="2025-02-13T15:44:11.062398006Z" level=info msg="StopPodSandbox for \"656cb9cd83dfd3cffb8e8f43761bf28694ef337aa250f9c277f46916222a15cc\""
Feb 13 15:44:11.063008 containerd[1896]: time="2025-02-13T15:44:11.062895825Z" level=info msg="Ensure that sandbox 656cb9cd83dfd3cffb8e8f43761bf28694ef337aa250f9c277f46916222a15cc in task-service has been cleanup successfully"
Feb 13 15:44:11.065856 containerd[1896]: time="2025-02-13T15:44:11.065542796Z" level=info msg="TearDown network for sandbox \"656cb9cd83dfd3cffb8e8f43761bf28694ef337aa250f9c277f46916222a15cc\" successfully"
Feb 13 15:44:11.065856 containerd[1896]: time="2025-02-13T15:44:11.065574506Z" level=info msg="StopPodSandbox for \"656cb9cd83dfd3cffb8e8f43761bf28694ef337aa250f9c277f46916222a15cc\" returns successfully"
Feb 13 15:44:11.066966 containerd[1896]: time="2025-02-13T15:44:11.066936395Z" level=info msg="StopPodSandbox for \"55da48b26bc30cfa7260af1475a550e283fb097ac1f85f25fe65770b53672329\""
Feb 13 15:44:11.067071 containerd[1896]: time="2025-02-13T15:44:11.067043535Z" level=info msg="TearDown network for sandbox \"55da48b26bc30cfa7260af1475a550e283fb097ac1f85f25fe65770b53672329\" successfully"
Feb 13 15:44:11.067071 containerd[1896]: time="2025-02-13T15:44:11.067059544Z" level=info msg="StopPodSandbox for \"55da48b26bc30cfa7260af1475a550e283fb097ac1f85f25fe65770b53672329\" returns successfully"
Feb 13 15:44:11.068355 kubelet[2396]: I0213 15:44:11.067403    2396 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88a5cf61a89c8d07a0e6583e15cd66236cfc0865d38d76aa2911169949bf9611"
Feb 13 15:44:11.068118 systemd[1]: run-netns-cni\x2d789c34e1\x2da6d8\x2d67d8\x2dbda1\x2d712f8191de90.mount: Deactivated successfully.
Feb 13 15:44:11.069877 containerd[1896]: time="2025-02-13T15:44:11.069839043Z" level=info msg="StopPodSandbox for \"db51172a800c1adb687d4121d2ec42d9aa54a58a68013c4c0280ad7511b6bafa\""
Feb 13 15:44:11.069978 containerd[1896]: time="2025-02-13T15:44:11.069953305Z" level=info msg="TearDown network for sandbox \"db51172a800c1adb687d4121d2ec42d9aa54a58a68013c4c0280ad7511b6bafa\" successfully"
Feb 13 15:44:11.069978 containerd[1896]: time="2025-02-13T15:44:11.069969557Z" level=info msg="StopPodSandbox for \"db51172a800c1adb687d4121d2ec42d9aa54a58a68013c4c0280ad7511b6bafa\" returns successfully"
Feb 13 15:44:11.070247 containerd[1896]: time="2025-02-13T15:44:11.070173392Z" level=info msg="StopPodSandbox for \"88a5cf61a89c8d07a0e6583e15cd66236cfc0865d38d76aa2911169949bf9611\""
Feb 13 15:44:11.070437 containerd[1896]: time="2025-02-13T15:44:11.070411614Z" level=info msg="Ensure that sandbox 88a5cf61a89c8d07a0e6583e15cd66236cfc0865d38d76aa2911169949bf9611 in task-service has been cleanup successfully"
Feb 13 15:44:11.073711 systemd[1]: run-netns-cni\x2da482ed2b\x2d87d5\x2d287b\x2d63de\x2d32cab280cb09.mount: Deactivated successfully.
Feb 13 15:44:11.074616 containerd[1896]: time="2025-02-13T15:44:11.074265136Z" level=info msg="TearDown network for sandbox \"88a5cf61a89c8d07a0e6583e15cd66236cfc0865d38d76aa2911169949bf9611\" successfully"
Feb 13 15:44:11.074616 containerd[1896]: time="2025-02-13T15:44:11.074304746Z" level=info msg="StopPodSandbox for \"88a5cf61a89c8d07a0e6583e15cd66236cfc0865d38d76aa2911169949bf9611\" returns successfully"
Feb 13 15:44:11.075622 containerd[1896]: time="2025-02-13T15:44:11.075473094Z" level=info msg="StopPodSandbox for \"1df1052dfcfa46dcea05d88e3b8d50dfac82d4fb973a472dbd4de76fe2b1d8db\""
Feb 13 15:44:11.075622 containerd[1896]: time="2025-02-13T15:44:11.075594822Z" level=info msg="TearDown network for sandbox \"1df1052dfcfa46dcea05d88e3b8d50dfac82d4fb973a472dbd4de76fe2b1d8db\" successfully"
Feb 13 15:44:11.075622 containerd[1896]: time="2025-02-13T15:44:11.075611608Z" level=info msg="StopPodSandbox for \"1df1052dfcfa46dcea05d88e3b8d50dfac82d4fb973a472dbd4de76fe2b1d8db\" returns successfully"
Feb 13 15:44:11.075929 containerd[1896]: time="2025-02-13T15:44:11.075883241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-9dw5w,Uid:f00d591e-7798-4af4-9c36-f83590ed4ecd,Namespace:default,Attempt:3,}"
Feb 13 15:44:11.077675 containerd[1896]: time="2025-02-13T15:44:11.077640945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-789ffc5c9d-vhwwt,Uid:8e0f5901-a2ec-434f-bd84-4e9bf59ff236,Namespace:calico-system,Attempt:2,}"
Feb 13 15:44:11.085829 kubelet[2396]: I0213 15:44:11.085208    2396 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47db03c8528aa2564696e560f597e1d6891193e02ca9fb59d80b465ed11f0300"
Feb 13 15:44:11.088029 containerd[1896]: time="2025-02-13T15:44:11.087989552Z" level=info msg="StopPodSandbox for \"47db03c8528aa2564696e560f597e1d6891193e02ca9fb59d80b465ed11f0300\""
Feb 13 15:44:11.093577 containerd[1896]: time="2025-02-13T15:44:11.088365715Z" level=info msg="Ensure that sandbox 47db03c8528aa2564696e560f597e1d6891193e02ca9fb59d80b465ed11f0300 in task-service has been cleanup successfully"
Feb 13 15:44:11.093577 containerd[1896]: time="2025-02-13T15:44:11.090883285Z" level=info msg="TearDown network for sandbox \"47db03c8528aa2564696e560f597e1d6891193e02ca9fb59d80b465ed11f0300\" successfully"
Feb 13 15:44:11.093577 containerd[1896]: time="2025-02-13T15:44:11.090916619Z" level=info msg="StopPodSandbox for \"47db03c8528aa2564696e560f597e1d6891193e02ca9fb59d80b465ed11f0300\" returns successfully"
Feb 13 15:44:11.093577 containerd[1896]: time="2025-02-13T15:44:11.092546643Z" level=info msg="StopPodSandbox for \"9319c94ca13030aa9d75515f11a6a27129f26b3a540c8fc9b3ff08bb8b690689\""
Feb 13 15:44:11.093577 containerd[1896]: time="2025-02-13T15:44:11.092678208Z" level=info msg="TearDown network for sandbox \"9319c94ca13030aa9d75515f11a6a27129f26b3a540c8fc9b3ff08bb8b690689\" successfully"
Feb 13 15:44:11.093577 containerd[1896]: time="2025-02-13T15:44:11.092695008Z" level=info msg="StopPodSandbox for \"9319c94ca13030aa9d75515f11a6a27129f26b3a540c8fc9b3ff08bb8b690689\" returns successfully"
Feb 13 15:44:11.092576 systemd[1]: run-netns-cni\x2db9698e3d\x2d9cee\x2d16cd\x2d8597\x2d11740a61dfc0.mount: Deactivated successfully.
Feb 13 15:44:11.096397 containerd[1896]: time="2025-02-13T15:44:11.096282176Z" level=info msg="StopPodSandbox for \"4227299146889fc5eacd5b01a7ebae841b87836f7462cf3119f2502d492cf188\""
Feb 13 15:44:11.096536 containerd[1896]: time="2025-02-13T15:44:11.096416442Z" level=info msg="TearDown network for sandbox \"4227299146889fc5eacd5b01a7ebae841b87836f7462cf3119f2502d492cf188\" successfully"
Feb 13 15:44:11.096536 containerd[1896]: time="2025-02-13T15:44:11.096442584Z" level=info msg="StopPodSandbox for \"4227299146889fc5eacd5b01a7ebae841b87836f7462cf3119f2502d492cf188\" returns successfully"
Feb 13 15:44:11.098864 containerd[1896]: time="2025-02-13T15:44:11.098818822Z" level=info msg="StopPodSandbox for \"b21fa514b2a2e042658f887a5d7d282c7a1ac15f6face81d510765827a1e1297\""
Feb 13 15:44:11.099352 containerd[1896]: time="2025-02-13T15:44:11.099314770Z" level=info msg="TearDown network for sandbox \"b21fa514b2a2e042658f887a5d7d282c7a1ac15f6face81d510765827a1e1297\" successfully"
Feb 13 15:44:11.099652 containerd[1896]: time="2025-02-13T15:44:11.099629240Z" level=info msg="StopPodSandbox for \"b21fa514b2a2e042658f887a5d7d282c7a1ac15f6face81d510765827a1e1297\" returns successfully"
Feb 13 15:44:11.100985 containerd[1896]: time="2025-02-13T15:44:11.100943814Z" level=info msg="StopPodSandbox for \"35005c0d515a0eaddf1296b48287bf2fc564795686b923d26c7167822c461e34\""
Feb 13 15:44:11.101084 containerd[1896]: time="2025-02-13T15:44:11.101068575Z" level=info msg="TearDown network for sandbox \"35005c0d515a0eaddf1296b48287bf2fc564795686b923d26c7167822c461e34\" successfully"
Feb 13 15:44:11.101125 containerd[1896]: time="2025-02-13T15:44:11.101098032Z" level=info msg="StopPodSandbox for \"35005c0d515a0eaddf1296b48287bf2fc564795686b923d26c7167822c461e34\" returns successfully"
Feb 13 15:44:11.103128 containerd[1896]: time="2025-02-13T15:44:11.102894226Z" level=info msg="StopPodSandbox for \"6eaf4246a3d5e7e3165b3333314936cdddcf2d034115976cf86b314f099da07b\""
Feb 13 15:44:11.103128 containerd[1896]: time="2025-02-13T15:44:11.102999343Z" level=info msg="TearDown network for sandbox \"6eaf4246a3d5e7e3165b3333314936cdddcf2d034115976cf86b314f099da07b\" successfully"
Feb 13 15:44:11.103128 containerd[1896]: time="2025-02-13T15:44:11.103025348Z" level=info msg="StopPodSandbox for \"6eaf4246a3d5e7e3165b3333314936cdddcf2d034115976cf86b314f099da07b\" returns successfully"
Feb 13 15:44:11.104726 containerd[1896]: time="2025-02-13T15:44:11.104691745Z" level=info msg="StopPodSandbox for \"98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556\""
Feb 13 15:44:11.105439 containerd[1896]: time="2025-02-13T15:44:11.105335074Z" level=info msg="TearDown network for sandbox \"98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556\" successfully"
Feb 13 15:44:11.105439 containerd[1896]: time="2025-02-13T15:44:11.105368603Z" level=info msg="StopPodSandbox for \"98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556\" returns successfully"
Feb 13 15:44:11.109868 containerd[1896]: time="2025-02-13T15:44:11.107080750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wpjns,Uid:0c126d30-b7f3-49c7-adb0-7d602f0e81f4,Namespace:calico-system,Attempt:7,}"
Feb 13 15:44:11.321946 containerd[1896]: time="2025-02-13T15:44:11.321790153Z" level=error msg="Failed to destroy network for sandbox \"c8536beca7c1adc7380a3d753335601951d908ff9b36659ec5353a8148eef270\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:11.325592 containerd[1896]: time="2025-02-13T15:44:11.325444112Z" level=error msg="encountered an error cleaning up failed sandbox \"c8536beca7c1adc7380a3d753335601951d908ff9b36659ec5353a8148eef270\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:11.326142 containerd[1896]: time="2025-02-13T15:44:11.325555435Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-789ffc5c9d-vhwwt,Uid:8e0f5901-a2ec-434f-bd84-4e9bf59ff236,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"c8536beca7c1adc7380a3d753335601951d908ff9b36659ec5353a8148eef270\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:11.326689 kubelet[2396]: E0213 15:44:11.326372    2396 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8536beca7c1adc7380a3d753335601951d908ff9b36659ec5353a8148eef270\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:11.326689 kubelet[2396]: E0213 15:44:11.326438    2396 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8536beca7c1adc7380a3d753335601951d908ff9b36659ec5353a8148eef270\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-789ffc5c9d-vhwwt"
Feb 13 15:44:11.326689 kubelet[2396]: E0213 15:44:11.326467    2396 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8536beca7c1adc7380a3d753335601951d908ff9b36659ec5353a8148eef270\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-789ffc5c9d-vhwwt"
Feb 13 15:44:11.327477 containerd[1896]: time="2025-02-13T15:44:11.327443955Z" level=error msg="Failed to destroy network for sandbox \"6739fff5b3d87914144e7ed2d37aaa58354cbe54cbd85331910bc79e96a35d44\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:11.327682 kubelet[2396]: E0213 15:44:11.327606    2396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-789ffc5c9d-vhwwt_calico-system(8e0f5901-a2ec-434f-bd84-4e9bf59ff236)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-789ffc5c9d-vhwwt_calico-system(8e0f5901-a2ec-434f-bd84-4e9bf59ff236)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c8536beca7c1adc7380a3d753335601951d908ff9b36659ec5353a8148eef270\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-789ffc5c9d-vhwwt" podUID="8e0f5901-a2ec-434f-bd84-4e9bf59ff236"
Feb 13 15:44:11.328539 containerd[1896]: time="2025-02-13T15:44:11.328429682Z" level=error msg="encountered an error cleaning up failed sandbox \"6739fff5b3d87914144e7ed2d37aaa58354cbe54cbd85331910bc79e96a35d44\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:11.328539 containerd[1896]: time="2025-02-13T15:44:11.328510189Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wpjns,Uid:0c126d30-b7f3-49c7-adb0-7d602f0e81f4,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"6739fff5b3d87914144e7ed2d37aaa58354cbe54cbd85331910bc79e96a35d44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:11.329021 kubelet[2396]: E0213 15:44:11.328738    2396 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6739fff5b3d87914144e7ed2d37aaa58354cbe54cbd85331910bc79e96a35d44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:11.329021 kubelet[2396]: E0213 15:44:11.328775    2396 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6739fff5b3d87914144e7ed2d37aaa58354cbe54cbd85331910bc79e96a35d44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wpjns"
Feb 13 15:44:11.329021 kubelet[2396]: E0213 15:44:11.328851    2396 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6739fff5b3d87914144e7ed2d37aaa58354cbe54cbd85331910bc79e96a35d44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wpjns"
Feb 13 15:44:11.329194 kubelet[2396]: E0213 15:44:11.328929    2396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wpjns_calico-system(0c126d30-b7f3-49c7-adb0-7d602f0e81f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wpjns_calico-system(0c126d30-b7f3-49c7-adb0-7d602f0e81f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6739fff5b3d87914144e7ed2d37aaa58354cbe54cbd85331910bc79e96a35d44\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wpjns" podUID="0c126d30-b7f3-49c7-adb0-7d602f0e81f4"
Feb 13 15:44:11.336831 containerd[1896]: time="2025-02-13T15:44:11.336774458Z" level=error msg="Failed to destroy network for sandbox \"d374608d28f97501de3648300d424ed8c7440f3cb2896d7898f11c9805c0b5db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:11.337822 containerd[1896]: time="2025-02-13T15:44:11.337331845Z" level=error msg="encountered an error cleaning up failed sandbox \"d374608d28f97501de3648300d424ed8c7440f3cb2896d7898f11c9805c0b5db\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:11.337822 containerd[1896]: time="2025-02-13T15:44:11.337413717Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-9dw5w,Uid:f00d591e-7798-4af4-9c36-f83590ed4ecd,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"d374608d28f97501de3648300d424ed8c7440f3cb2896d7898f11c9805c0b5db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:11.338029 kubelet[2396]: E0213 15:44:11.337705    2396 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d374608d28f97501de3648300d424ed8c7440f3cb2896d7898f11c9805c0b5db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:11.338332 kubelet[2396]: E0213 15:44:11.337774    2396 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d374608d28f97501de3648300d424ed8c7440f3cb2896d7898f11c9805c0b5db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-9dw5w"
Feb 13 15:44:11.338332 kubelet[2396]: E0213 15:44:11.338138    2396 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d374608d28f97501de3648300d424ed8c7440f3cb2896d7898f11c9805c0b5db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-9dw5w"
Feb 13 15:44:11.338332 kubelet[2396]: E0213 15:44:11.338217    2396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-9dw5w_default(f00d591e-7798-4af4-9c36-f83590ed4ecd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-9dw5w_default(f00d591e-7798-4af4-9c36-f83590ed4ecd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d374608d28f97501de3648300d424ed8c7440f3cb2896d7898f11c9805c0b5db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-9dw5w" podUID="f00d591e-7798-4af4-9c36-f83590ed4ecd"
Feb 13 15:44:11.700480 kubelet[2396]: E0213 15:44:11.699968    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:12.017763 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d374608d28f97501de3648300d424ed8c7440f3cb2896d7898f11c9805c0b5db-shm.mount: Deactivated successfully.
Feb 13 15:44:12.128406 kubelet[2396]: I0213 15:44:12.127739    2396 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6739fff5b3d87914144e7ed2d37aaa58354cbe54cbd85331910bc79e96a35d44"
Feb 13 15:44:12.131147 containerd[1896]: time="2025-02-13T15:44:12.130515984Z" level=info msg="StopPodSandbox for \"6739fff5b3d87914144e7ed2d37aaa58354cbe54cbd85331910bc79e96a35d44\""
Feb 13 15:44:12.131147 containerd[1896]: time="2025-02-13T15:44:12.130848942Z" level=info msg="Ensure that sandbox 6739fff5b3d87914144e7ed2d37aaa58354cbe54cbd85331910bc79e96a35d44 in task-service has been cleanup successfully"
Feb 13 15:44:12.131147 containerd[1896]: time="2025-02-13T15:44:12.131026798Z" level=info msg="TearDown network for sandbox \"6739fff5b3d87914144e7ed2d37aaa58354cbe54cbd85331910bc79e96a35d44\" successfully"
Feb 13 15:44:12.131147 containerd[1896]: time="2025-02-13T15:44:12.131046480Z" level=info msg="StopPodSandbox for \"6739fff5b3d87914144e7ed2d37aaa58354cbe54cbd85331910bc79e96a35d44\" returns successfully"
Feb 13 15:44:12.131831 containerd[1896]: time="2025-02-13T15:44:12.131655715Z" level=info msg="StopPodSandbox for \"47db03c8528aa2564696e560f597e1d6891193e02ca9fb59d80b465ed11f0300\""
Feb 13 15:44:12.131831 containerd[1896]: time="2025-02-13T15:44:12.131745810Z" level=info msg="TearDown network for sandbox \"47db03c8528aa2564696e560f597e1d6891193e02ca9fb59d80b465ed11f0300\" successfully"
Feb 13 15:44:12.131831 containerd[1896]: time="2025-02-13T15:44:12.131759819Z" level=info msg="StopPodSandbox for \"47db03c8528aa2564696e560f597e1d6891193e02ca9fb59d80b465ed11f0300\" returns successfully"
Feb 13 15:44:12.141303 containerd[1896]: time="2025-02-13T15:44:12.140756970Z" level=info msg="StopPodSandbox for \"9319c94ca13030aa9d75515f11a6a27129f26b3a540c8fc9b3ff08bb8b690689\""
Feb 13 15:44:12.141303 containerd[1896]: time="2025-02-13T15:44:12.140971393Z" level=info msg="TearDown network for sandbox \"9319c94ca13030aa9d75515f11a6a27129f26b3a540c8fc9b3ff08bb8b690689\" successfully"
Feb 13 15:44:12.141303 containerd[1896]: time="2025-02-13T15:44:12.140991310Z" level=info msg="StopPodSandbox for \"9319c94ca13030aa9d75515f11a6a27129f26b3a540c8fc9b3ff08bb8b690689\" returns successfully"
Feb 13 15:44:12.146708 containerd[1896]: time="2025-02-13T15:44:12.146653051Z" level=info msg="StopPodSandbox for \"4227299146889fc5eacd5b01a7ebae841b87836f7462cf3119f2502d492cf188\""
Feb 13 15:44:12.147069 containerd[1896]: time="2025-02-13T15:44:12.147003421Z" level=info msg="TearDown network for sandbox \"4227299146889fc5eacd5b01a7ebae841b87836f7462cf3119f2502d492cf188\" successfully"
Feb 13 15:44:12.147069 containerd[1896]: time="2025-02-13T15:44:12.147027371Z" level=info msg="StopPodSandbox for \"4227299146889fc5eacd5b01a7ebae841b87836f7462cf3119f2502d492cf188\" returns successfully"
Feb 13 15:44:12.147428 kubelet[2396]: I0213 15:44:12.147404    2396 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d374608d28f97501de3648300d424ed8c7440f3cb2896d7898f11c9805c0b5db"
Feb 13 15:44:12.150385 containerd[1896]: time="2025-02-13T15:44:12.150344377Z" level=info msg="StopPodSandbox for \"b21fa514b2a2e042658f887a5d7d282c7a1ac15f6face81d510765827a1e1297\""
Feb 13 15:44:12.150677 containerd[1896]: time="2025-02-13T15:44:12.150653640Z" level=info msg="TearDown network for sandbox \"b21fa514b2a2e042658f887a5d7d282c7a1ac15f6face81d510765827a1e1297\" successfully"
Feb 13 15:44:12.151386 containerd[1896]: time="2025-02-13T15:44:12.150839343Z" level=info msg="StopPodSandbox for \"b21fa514b2a2e042658f887a5d7d282c7a1ac15f6face81d510765827a1e1297\" returns successfully"
Feb 13 15:44:12.151134 systemd[1]: run-netns-cni\x2da4b2b156\x2d66c3\x2db13f\x2d230c\x2d47d81199eca3.mount: Deactivated successfully.
Feb 13 15:44:12.160595 containerd[1896]: time="2025-02-13T15:44:12.160020348Z" level=info msg="StopPodSandbox for \"d374608d28f97501de3648300d424ed8c7440f3cb2896d7898f11c9805c0b5db\""
Feb 13 15:44:12.160595 containerd[1896]: time="2025-02-13T15:44:12.160442046Z" level=info msg="Ensure that sandbox d374608d28f97501de3648300d424ed8c7440f3cb2896d7898f11c9805c0b5db in task-service has been cleanup successfully"
Feb 13 15:44:12.160595 containerd[1896]: time="2025-02-13T15:44:12.160490332Z" level=info msg="StopPodSandbox for \"35005c0d515a0eaddf1296b48287bf2fc564795686b923d26c7167822c461e34\""
Feb 13 15:44:12.163001 containerd[1896]: time="2025-02-13T15:44:12.160588699Z" level=info msg="TearDown network for sandbox \"35005c0d515a0eaddf1296b48287bf2fc564795686b923d26c7167822c461e34\" successfully"
Feb 13 15:44:12.163001 containerd[1896]: time="2025-02-13T15:44:12.160642219Z" level=info msg="StopPodSandbox for \"35005c0d515a0eaddf1296b48287bf2fc564795686b923d26c7167822c461e34\" returns successfully"
Feb 13 15:44:12.163437 containerd[1896]: time="2025-02-13T15:44:12.163204261Z" level=info msg="TearDown network for sandbox \"d374608d28f97501de3648300d424ed8c7440f3cb2896d7898f11c9805c0b5db\" successfully"
Feb 13 15:44:12.163437 containerd[1896]: time="2025-02-13T15:44:12.163233554Z" level=info msg="StopPodSandbox for \"d374608d28f97501de3648300d424ed8c7440f3cb2896d7898f11c9805c0b5db\" returns successfully"
Feb 13 15:44:12.164861 systemd[1]: run-netns-cni\x2dbe4675a5\x2d2918\x2de0c8\x2d7359\x2dcb50491ec358.mount: Deactivated successfully.
Feb 13 15:44:12.167276 containerd[1896]: time="2025-02-13T15:44:12.167148204Z" level=info msg="StopPodSandbox for \"656cb9cd83dfd3cffb8e8f43761bf28694ef337aa250f9c277f46916222a15cc\""
Feb 13 15:44:12.167276 containerd[1896]: time="2025-02-13T15:44:12.167267572Z" level=info msg="TearDown network for sandbox \"656cb9cd83dfd3cffb8e8f43761bf28694ef337aa250f9c277f46916222a15cc\" successfully"
Feb 13 15:44:12.167694 containerd[1896]: time="2025-02-13T15:44:12.167282571Z" level=info msg="StopPodSandbox for \"656cb9cd83dfd3cffb8e8f43761bf28694ef337aa250f9c277f46916222a15cc\" returns successfully"
Feb 13 15:44:12.167694 containerd[1896]: time="2025-02-13T15:44:12.167406179Z" level=info msg="StopPodSandbox for \"6eaf4246a3d5e7e3165b3333314936cdddcf2d034115976cf86b314f099da07b\""
Feb 13 15:44:12.167694 containerd[1896]: time="2025-02-13T15:44:12.167500788Z" level=info msg="TearDown network for sandbox \"6eaf4246a3d5e7e3165b3333314936cdddcf2d034115976cf86b314f099da07b\" successfully"
Feb 13 15:44:12.167694 containerd[1896]: time="2025-02-13T15:44:12.167516828Z" level=info msg="StopPodSandbox for \"6eaf4246a3d5e7e3165b3333314936cdddcf2d034115976cf86b314f099da07b\" returns successfully"
Feb 13 15:44:12.168372 containerd[1896]: time="2025-02-13T15:44:12.168333187Z" level=info msg="StopPodSandbox for \"98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556\""
Feb 13 15:44:12.168459 containerd[1896]: time="2025-02-13T15:44:12.168429990Z" level=info msg="TearDown network for sandbox \"98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556\" successfully"
Feb 13 15:44:12.168459 containerd[1896]: time="2025-02-13T15:44:12.168445046Z" level=info msg="StopPodSandbox for \"98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556\" returns successfully"
Feb 13 15:44:12.168775 containerd[1896]: time="2025-02-13T15:44:12.168608416Z" level=info msg="StopPodSandbox for \"55da48b26bc30cfa7260af1475a550e283fb097ac1f85f25fe65770b53672329\""
Feb 13 15:44:12.168775 containerd[1896]: time="2025-02-13T15:44:12.168691237Z" level=info msg="TearDown network for sandbox \"55da48b26bc30cfa7260af1475a550e283fb097ac1f85f25fe65770b53672329\" successfully"
Feb 13 15:44:12.168775 containerd[1896]: time="2025-02-13T15:44:12.168705338Z" level=info msg="StopPodSandbox for \"55da48b26bc30cfa7260af1475a550e283fb097ac1f85f25fe65770b53672329\" returns successfully"
Feb 13 15:44:12.169717 containerd[1896]: time="2025-02-13T15:44:12.169195948Z" level=info msg="StopPodSandbox for \"db51172a800c1adb687d4121d2ec42d9aa54a58a68013c4c0280ad7511b6bafa\""
Feb 13 15:44:12.169717 containerd[1896]: time="2025-02-13T15:44:12.169290584Z" level=info msg="TearDown network for sandbox \"db51172a800c1adb687d4121d2ec42d9aa54a58a68013c4c0280ad7511b6bafa\" successfully"
Feb 13 15:44:12.169717 containerd[1896]: time="2025-02-13T15:44:12.169306210Z" level=info msg="StopPodSandbox for \"db51172a800c1adb687d4121d2ec42d9aa54a58a68013c4c0280ad7511b6bafa\" returns successfully"
Feb 13 15:44:12.170173 containerd[1896]: time="2025-02-13T15:44:12.170148075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wpjns,Uid:0c126d30-b7f3-49c7-adb0-7d602f0e81f4,Namespace:calico-system,Attempt:8,}"
Feb 13 15:44:12.171932 containerd[1896]: time="2025-02-13T15:44:12.171859939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-9dw5w,Uid:f00d591e-7798-4af4-9c36-f83590ed4ecd,Namespace:default,Attempt:4,}"
Feb 13 15:44:12.174298 kubelet[2396]: I0213 15:44:12.174240    2396 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8536beca7c1adc7380a3d753335601951d908ff9b36659ec5353a8148eef270"
Feb 13 15:44:12.177203 containerd[1896]: time="2025-02-13T15:44:12.176810236Z" level=info msg="StopPodSandbox for \"c8536beca7c1adc7380a3d753335601951d908ff9b36659ec5353a8148eef270\""
Feb 13 15:44:12.177203 containerd[1896]: time="2025-02-13T15:44:12.177053121Z" level=info msg="Ensure that sandbox c8536beca7c1adc7380a3d753335601951d908ff9b36659ec5353a8148eef270 in task-service has been cleanup successfully"
Feb 13 15:44:12.177966 containerd[1896]: time="2025-02-13T15:44:12.177557535Z" level=info msg="TearDown network for sandbox \"c8536beca7c1adc7380a3d753335601951d908ff9b36659ec5353a8148eef270\" successfully"
Feb 13 15:44:12.180443 containerd[1896]: time="2025-02-13T15:44:12.180413011Z" level=info msg="StopPodSandbox for \"c8536beca7c1adc7380a3d753335601951d908ff9b36659ec5353a8148eef270\" returns successfully"
Feb 13 15:44:12.180718 systemd[1]: run-netns-cni\x2d8333aa8b\x2d0b0b\x2d4662\x2db79e\x2d74b6daad7f57.mount: Deactivated successfully.
Feb 13 15:44:12.183184 containerd[1896]: time="2025-02-13T15:44:12.182245117Z" level=info msg="StopPodSandbox for \"88a5cf61a89c8d07a0e6583e15cd66236cfc0865d38d76aa2911169949bf9611\""
Feb 13 15:44:12.183184 containerd[1896]: time="2025-02-13T15:44:12.182356213Z" level=info msg="TearDown network for sandbox \"88a5cf61a89c8d07a0e6583e15cd66236cfc0865d38d76aa2911169949bf9611\" successfully"
Feb 13 15:44:12.183184 containerd[1896]: time="2025-02-13T15:44:12.182373242Z" level=info msg="StopPodSandbox for \"88a5cf61a89c8d07a0e6583e15cd66236cfc0865d38d76aa2911169949bf9611\" returns successfully"
Feb 13 15:44:12.183526 containerd[1896]: time="2025-02-13T15:44:12.183502105Z" level=info msg="StopPodSandbox for \"1df1052dfcfa46dcea05d88e3b8d50dfac82d4fb973a472dbd4de76fe2b1d8db\""
Feb 13 15:44:12.184124 containerd[1896]: time="2025-02-13T15:44:12.184100344Z" level=info msg="TearDown network for sandbox \"1df1052dfcfa46dcea05d88e3b8d50dfac82d4fb973a472dbd4de76fe2b1d8db\" successfully"
Feb 13 15:44:12.184236 containerd[1896]: time="2025-02-13T15:44:12.184215810Z" level=info msg="StopPodSandbox for \"1df1052dfcfa46dcea05d88e3b8d50dfac82d4fb973a472dbd4de76fe2b1d8db\" returns successfully"
Feb 13 15:44:12.186458 containerd[1896]: time="2025-02-13T15:44:12.185720520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-789ffc5c9d-vhwwt,Uid:8e0f5901-a2ec-434f-bd84-4e9bf59ff236,Namespace:calico-system,Attempt:3,}"
Feb 13 15:44:12.379016 containerd[1896]: time="2025-02-13T15:44:12.377041524Z" level=error msg="Failed to destroy network for sandbox \"9c81786dc2bcba77be493d89d3c6cb3fd4a262dbc72b27518c3242db3d36158a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:12.380822 containerd[1896]: time="2025-02-13T15:44:12.380642929Z" level=error msg="encountered an error cleaning up failed sandbox \"9c81786dc2bcba77be493d89d3c6cb3fd4a262dbc72b27518c3242db3d36158a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:12.380822 containerd[1896]: time="2025-02-13T15:44:12.380727777Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-789ffc5c9d-vhwwt,Uid:8e0f5901-a2ec-434f-bd84-4e9bf59ff236,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"9c81786dc2bcba77be493d89d3c6cb3fd4a262dbc72b27518c3242db3d36158a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:12.381241 kubelet[2396]: E0213 15:44:12.381184    2396 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c81786dc2bcba77be493d89d3c6cb3fd4a262dbc72b27518c3242db3d36158a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:12.381347 kubelet[2396]: E0213 15:44:12.381261    2396 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c81786dc2bcba77be493d89d3c6cb3fd4a262dbc72b27518c3242db3d36158a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-789ffc5c9d-vhwwt"
Feb 13 15:44:12.381347 kubelet[2396]: E0213 15:44:12.381290    2396 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c81786dc2bcba77be493d89d3c6cb3fd4a262dbc72b27518c3242db3d36158a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-789ffc5c9d-vhwwt"
Feb 13 15:44:12.381835 kubelet[2396]: E0213 15:44:12.381764    2396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-789ffc5c9d-vhwwt_calico-system(8e0f5901-a2ec-434f-bd84-4e9bf59ff236)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-789ffc5c9d-vhwwt_calico-system(8e0f5901-a2ec-434f-bd84-4e9bf59ff236)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c81786dc2bcba77be493d89d3c6cb3fd4a262dbc72b27518c3242db3d36158a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-789ffc5c9d-vhwwt" podUID="8e0f5901-a2ec-434f-bd84-4e9bf59ff236"
Feb 13 15:44:12.431961 containerd[1896]: time="2025-02-13T15:44:12.431910422Z" level=error msg="Failed to destroy network for sandbox \"2d92d0afdbd9c76e4596d872d0c8cd83c9b34d058b1be61916ad7fd286d5c624\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:12.432596 containerd[1896]: time="2025-02-13T15:44:12.432410457Z" level=error msg="encountered an error cleaning up failed sandbox \"2d92d0afdbd9c76e4596d872d0c8cd83c9b34d058b1be61916ad7fd286d5c624\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:12.432596 containerd[1896]: time="2025-02-13T15:44:12.432488923Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wpjns,Uid:0c126d30-b7f3-49c7-adb0-7d602f0e81f4,Namespace:calico-system,Attempt:8,} failed, error" error="failed to setup network for sandbox \"2d92d0afdbd9c76e4596d872d0c8cd83c9b34d058b1be61916ad7fd286d5c624\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:12.433077 kubelet[2396]: E0213 15:44:12.432920    2396 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d92d0afdbd9c76e4596d872d0c8cd83c9b34d058b1be61916ad7fd286d5c624\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:12.433077 kubelet[2396]: E0213 15:44:12.432989    2396 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d92d0afdbd9c76e4596d872d0c8cd83c9b34d058b1be61916ad7fd286d5c624\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wpjns"
Feb 13 15:44:12.433077 kubelet[2396]: E0213 15:44:12.433018    2396 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d92d0afdbd9c76e4596d872d0c8cd83c9b34d058b1be61916ad7fd286d5c624\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wpjns"
Feb 13 15:44:12.433276 kubelet[2396]: E0213 15:44:12.433092    2396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wpjns_calico-system(0c126d30-b7f3-49c7-adb0-7d602f0e81f4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wpjns_calico-system(0c126d30-b7f3-49c7-adb0-7d602f0e81f4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d92d0afdbd9c76e4596d872d0c8cd83c9b34d058b1be61916ad7fd286d5c624\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wpjns" podUID="0c126d30-b7f3-49c7-adb0-7d602f0e81f4"
Feb 13 15:44:12.436452 containerd[1896]: time="2025-02-13T15:44:12.435943471Z" level=error msg="Failed to destroy network for sandbox \"ec7fa9dbbdea58e8ef16d6eefb04a24dfbaa5dd7ecb49e8325526f1bb2d43b75\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:12.436452 containerd[1896]: time="2025-02-13T15:44:12.436265347Z" level=error msg="encountered an error cleaning up failed sandbox \"ec7fa9dbbdea58e8ef16d6eefb04a24dfbaa5dd7ecb49e8325526f1bb2d43b75\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:12.436452 containerd[1896]: time="2025-02-13T15:44:12.436339398Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-9dw5w,Uid:f00d591e-7798-4af4-9c36-f83590ed4ecd,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"ec7fa9dbbdea58e8ef16d6eefb04a24dfbaa5dd7ecb49e8325526f1bb2d43b75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:12.437118 kubelet[2396]: E0213 15:44:12.436898    2396 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec7fa9dbbdea58e8ef16d6eefb04a24dfbaa5dd7ecb49e8325526f1bb2d43b75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 15:44:12.437118 kubelet[2396]: E0213 15:44:12.436965    2396 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec7fa9dbbdea58e8ef16d6eefb04a24dfbaa5dd7ecb49e8325526f1bb2d43b75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-9dw5w"
Feb 13 15:44:12.437118 kubelet[2396]: E0213 15:44:12.436994    2396 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec7fa9dbbdea58e8ef16d6eefb04a24dfbaa5dd7ecb49e8325526f1bb2d43b75\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-9dw5w"
Feb 13 15:44:12.437323 kubelet[2396]: E0213 15:44:12.437063    2396 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-9dw5w_default(f00d591e-7798-4af4-9c36-f83590ed4ecd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-9dw5w_default(f00d591e-7798-4af4-9c36-f83590ed4ecd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec7fa9dbbdea58e8ef16d6eefb04a24dfbaa5dd7ecb49e8325526f1bb2d43b75\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-9dw5w" podUID="f00d591e-7798-4af4-9c36-f83590ed4ecd"
Feb 13 15:44:12.651216 containerd[1896]: time="2025-02-13T15:44:12.651078626Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:44:12.652201 containerd[1896]: time="2025-02-13T15:44:12.652153667Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010"
Feb 13 15:44:12.654256 containerd[1896]: time="2025-02-13T15:44:12.653116026Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:44:12.655824 containerd[1896]: time="2025-02-13T15:44:12.655271273Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:44:12.656089 containerd[1896]: time="2025-02-13T15:44:12.656060899Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 9.709746722s"
Feb 13 15:44:12.656208 containerd[1896]: time="2025-02-13T15:44:12.656188855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\""
Feb 13 15:44:12.657306 containerd[1896]: time="2025-02-13T15:44:12.657283606Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Feb 13 15:44:12.665119 containerd[1896]: time="2025-02-13T15:44:12.665071394Z" level=info msg="CreateContainer within sandbox \"e8a1ba7cc6a1527825515b4c94ba70d92febff98b58a879c8ae974daab3874fe\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Feb 13 15:44:12.682609 containerd[1896]: time="2025-02-13T15:44:12.682563133Z" level=info msg="CreateContainer within sandbox \"e8a1ba7cc6a1527825515b4c94ba70d92febff98b58a879c8ae974daab3874fe\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3a2295398bb35800bb2fc5b5ed1d8f2fa5055c14e7dfbc8ed214cbb9bede0453\""
Feb 13 15:44:12.684841 containerd[1896]: time="2025-02-13T15:44:12.683152817Z" level=info msg="StartContainer for \"3a2295398bb35800bb2fc5b5ed1d8f2fa5055c14e7dfbc8ed214cbb9bede0453\""
Feb 13 15:44:12.701650 kubelet[2396]: E0213 15:44:12.700587    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:12.791041 systemd[1]: Started cri-containerd-3a2295398bb35800bb2fc5b5ed1d8f2fa5055c14e7dfbc8ed214cbb9bede0453.scope - libcontainer container 3a2295398bb35800bb2fc5b5ed1d8f2fa5055c14e7dfbc8ed214cbb9bede0453.
Feb 13 15:44:12.834207 containerd[1896]: time="2025-02-13T15:44:12.834162375Z" level=info msg="StartContainer for \"3a2295398bb35800bb2fc5b5ed1d8f2fa5055c14e7dfbc8ed214cbb9bede0453\" returns successfully"
Feb 13 15:44:12.938989 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Feb 13 15:44:12.939143 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Feb 13 15:44:13.042493 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9c81786dc2bcba77be493d89d3c6cb3fd4a262dbc72b27518c3242db3d36158a-shm.mount: Deactivated successfully.
Feb 13 15:44:13.044372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2591598988.mount: Deactivated successfully.
Feb 13 15:44:13.180650 kubelet[2396]: I0213 15:44:13.180614    2396 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c81786dc2bcba77be493d89d3c6cb3fd4a262dbc72b27518c3242db3d36158a"
Feb 13 15:44:13.185819 containerd[1896]: time="2025-02-13T15:44:13.183140378Z" level=info msg="StopPodSandbox for \"9c81786dc2bcba77be493d89d3c6cb3fd4a262dbc72b27518c3242db3d36158a\""
Feb 13 15:44:13.185819 containerd[1896]: time="2025-02-13T15:44:13.183385470Z" level=info msg="Ensure that sandbox 9c81786dc2bcba77be493d89d3c6cb3fd4a262dbc72b27518c3242db3d36158a in task-service has been cleanup successfully"
Feb 13 15:44:13.186073 systemd[1]: run-netns-cni\x2d4d865fff\x2df7a2\x2d0140\x2d41da\x2d3732bef44274.mount: Deactivated successfully.
Feb 13 15:44:13.187634 containerd[1896]: time="2025-02-13T15:44:13.187490908Z" level=info msg="TearDown network for sandbox \"9c81786dc2bcba77be493d89d3c6cb3fd4a262dbc72b27518c3242db3d36158a\" successfully"
Feb 13 15:44:13.187634 containerd[1896]: time="2025-02-13T15:44:13.187547809Z" level=info msg="StopPodSandbox for \"9c81786dc2bcba77be493d89d3c6cb3fd4a262dbc72b27518c3242db3d36158a\" returns successfully"
Feb 13 15:44:13.190709 containerd[1896]: time="2025-02-13T15:44:13.190652655Z" level=info msg="StopPodSandbox for \"c8536beca7c1adc7380a3d753335601951d908ff9b36659ec5353a8148eef270\""
Feb 13 15:44:13.191393 containerd[1896]: time="2025-02-13T15:44:13.190971498Z" level=info msg="TearDown network for sandbox \"c8536beca7c1adc7380a3d753335601951d908ff9b36659ec5353a8148eef270\" successfully"
Feb 13 15:44:13.191393 containerd[1896]: time="2025-02-13T15:44:13.191239996Z" level=info msg="StopPodSandbox for \"c8536beca7c1adc7380a3d753335601951d908ff9b36659ec5353a8148eef270\" returns successfully"
Feb 13 15:44:13.192661 containerd[1896]: time="2025-02-13T15:44:13.192193164Z" level=info msg="StopPodSandbox for \"88a5cf61a89c8d07a0e6583e15cd66236cfc0865d38d76aa2911169949bf9611\""
Feb 13 15:44:13.192661 containerd[1896]: time="2025-02-13T15:44:13.192291024Z" level=info msg="TearDown network for sandbox \"88a5cf61a89c8d07a0e6583e15cd66236cfc0865d38d76aa2911169949bf9611\" successfully"
Feb 13 15:44:13.192661 containerd[1896]: time="2025-02-13T15:44:13.192305197Z" level=info msg="StopPodSandbox for \"88a5cf61a89c8d07a0e6583e15cd66236cfc0865d38d76aa2911169949bf9611\" returns successfully"
Feb 13 15:44:13.195821 containerd[1896]: time="2025-02-13T15:44:13.195080171Z" level=info msg="StopPodSandbox for \"1df1052dfcfa46dcea05d88e3b8d50dfac82d4fb973a472dbd4de76fe2b1d8db\""
Feb 13 15:44:13.195821 containerd[1896]: time="2025-02-13T15:44:13.195189386Z" level=info msg="TearDown network for sandbox \"1df1052dfcfa46dcea05d88e3b8d50dfac82d4fb973a472dbd4de76fe2b1d8db\" successfully"
Feb 13 15:44:13.195821 containerd[1896]: time="2025-02-13T15:44:13.195204093Z" level=info msg="StopPodSandbox for \"1df1052dfcfa46dcea05d88e3b8d50dfac82d4fb973a472dbd4de76fe2b1d8db\" returns successfully"
Feb 13 15:44:13.196097 containerd[1896]: time="2025-02-13T15:44:13.195977114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-789ffc5c9d-vhwwt,Uid:8e0f5901-a2ec-434f-bd84-4e9bf59ff236,Namespace:calico-system,Attempt:4,}"
Feb 13 15:44:13.208747 kubelet[2396]: I0213 15:44:13.208247    2396 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d92d0afdbd9c76e4596d872d0c8cd83c9b34d058b1be61916ad7fd286d5c624"
Feb 13 15:44:13.210942 containerd[1896]: time="2025-02-13T15:44:13.210446356Z" level=info msg="StopPodSandbox for \"2d92d0afdbd9c76e4596d872d0c8cd83c9b34d058b1be61916ad7fd286d5c624\""
Feb 13 15:44:13.210942 containerd[1896]: time="2025-02-13T15:44:13.210723809Z" level=info msg="Ensure that sandbox 2d92d0afdbd9c76e4596d872d0c8cd83c9b34d058b1be61916ad7fd286d5c624 in task-service has been cleanup successfully"
Feb 13 15:44:13.218542 containerd[1896]: time="2025-02-13T15:44:13.218494938Z" level=info msg="TearDown network for sandbox \"2d92d0afdbd9c76e4596d872d0c8cd83c9b34d058b1be61916ad7fd286d5c624\" successfully"
Feb 13 15:44:13.218963 containerd[1896]: time="2025-02-13T15:44:13.218828304Z" level=info msg="StopPodSandbox for \"2d92d0afdbd9c76e4596d872d0c8cd83c9b34d058b1be61916ad7fd286d5c624\" returns successfully"
Feb 13 15:44:13.218897 systemd[1]: run-netns-cni\x2df244f714\x2d2ba8\x2ddb17\x2d6d00\x2d240014ccf42f.mount: Deactivated successfully.
Feb 13 15:44:13.223356 containerd[1896]: time="2025-02-13T15:44:13.222966244Z" level=info msg="StopPodSandbox for \"6739fff5b3d87914144e7ed2d37aaa58354cbe54cbd85331910bc79e96a35d44\""
Feb 13 15:44:13.234833 containerd[1896]: time="2025-02-13T15:44:13.227950732Z" level=info msg="TearDown network for sandbox \"6739fff5b3d87914144e7ed2d37aaa58354cbe54cbd85331910bc79e96a35d44\" successfully"
Feb 13 15:44:13.234833 containerd[1896]: time="2025-02-13T15:44:13.227989566Z" level=info msg="StopPodSandbox for \"6739fff5b3d87914144e7ed2d37aaa58354cbe54cbd85331910bc79e96a35d44\" returns successfully"
Feb 13 15:44:13.235033 kubelet[2396]: I0213 15:44:13.232944    2396 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-whtdb" podStartSLOduration=3.7523519800000003 podStartE2EDuration="24.232923145s" podCreationTimestamp="2025-02-13 15:43:49 +0000 UTC" firstStartedPulling="2025-02-13 15:43:52.1765118 +0000 UTC m=+4.258394077" lastFinishedPulling="2025-02-13 15:44:12.657082965 +0000 UTC m=+24.738965242" observedRunningTime="2025-02-13 15:44:13.232465821 +0000 UTC m=+25.314348150" watchObservedRunningTime="2025-02-13 15:44:13.232923145 +0000 UTC m=+25.314805443"
Feb 13 15:44:13.236803 containerd[1896]: time="2025-02-13T15:44:13.236750715Z" level=info msg="StopPodSandbox for \"47db03c8528aa2564696e560f597e1d6891193e02ca9fb59d80b465ed11f0300\""
Feb 13 15:44:13.237072 containerd[1896]: time="2025-02-13T15:44:13.237049767Z" level=info msg="TearDown network for sandbox \"47db03c8528aa2564696e560f597e1d6891193e02ca9fb59d80b465ed11f0300\" successfully"
Feb 13 15:44:13.237198 containerd[1896]: time="2025-02-13T15:44:13.237178648Z" level=info msg="StopPodSandbox for \"47db03c8528aa2564696e560f597e1d6891193e02ca9fb59d80b465ed11f0300\" returns successfully"
Feb 13 15:44:13.238359 containerd[1896]: time="2025-02-13T15:44:13.238327992Z" level=info msg="StopPodSandbox for \"9319c94ca13030aa9d75515f11a6a27129f26b3a540c8fc9b3ff08bb8b690689\""
Feb 13 15:44:13.241956 kubelet[2396]: I0213 15:44:13.241928    2396 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec7fa9dbbdea58e8ef16d6eefb04a24dfbaa5dd7ecb49e8325526f1bb2d43b75"
Feb 13 15:44:13.245120 containerd[1896]: time="2025-02-13T15:44:13.244035640Z" level=info msg="StopPodSandbox for \"ec7fa9dbbdea58e8ef16d6eefb04a24dfbaa5dd7ecb49e8325526f1bb2d43b75\""
Feb 13 15:44:13.245674 containerd[1896]: time="2025-02-13T15:44:13.245643234Z" level=info msg="Ensure that sandbox ec7fa9dbbdea58e8ef16d6eefb04a24dfbaa5dd7ecb49e8325526f1bb2d43b75 in task-service has been cleanup successfully"
Feb 13 15:44:13.246018 containerd[1896]: time="2025-02-13T15:44:13.245996657Z" level=info msg="TearDown network for sandbox \"ec7fa9dbbdea58e8ef16d6eefb04a24dfbaa5dd7ecb49e8325526f1bb2d43b75\" successfully"
Feb 13 15:44:13.246223 containerd[1896]: time="2025-02-13T15:44:13.246204592Z" level=info msg="StopPodSandbox for \"ec7fa9dbbdea58e8ef16d6eefb04a24dfbaa5dd7ecb49e8325526f1bb2d43b75\" returns successfully"
Feb 13 15:44:13.246854 containerd[1896]: time="2025-02-13T15:44:13.246829485Z" level=info msg="TearDown network for sandbox \"9319c94ca13030aa9d75515f11a6a27129f26b3a540c8fc9b3ff08bb8b690689\" successfully"
Feb 13 15:44:13.246995 containerd[1896]: time="2025-02-13T15:44:13.246977711Z" level=info msg="StopPodSandbox for \"9319c94ca13030aa9d75515f11a6a27129f26b3a540c8fc9b3ff08bb8b690689\" returns successfully"
Feb 13 15:44:13.248085 containerd[1896]: time="2025-02-13T15:44:13.247660584Z" level=info msg="StopPodSandbox for \"d374608d28f97501de3648300d424ed8c7440f3cb2896d7898f11c9805c0b5db\""
Feb 13 15:44:13.248638 containerd[1896]: time="2025-02-13T15:44:13.248616047Z" level=info msg="StopPodSandbox for \"4227299146889fc5eacd5b01a7ebae841b87836f7462cf3119f2502d492cf188\""
Feb 13 15:44:13.248842 containerd[1896]: time="2025-02-13T15:44:13.248824242Z" level=info msg="TearDown network for sandbox \"4227299146889fc5eacd5b01a7ebae841b87836f7462cf3119f2502d492cf188\" successfully"
Feb 13 15:44:13.249002 containerd[1896]: time="2025-02-13T15:44:13.248982906Z" level=info msg="StopPodSandbox for \"4227299146889fc5eacd5b01a7ebae841b87836f7462cf3119f2502d492cf188\" returns successfully"
Feb 13 15:44:13.249983 containerd[1896]: time="2025-02-13T15:44:13.249945699Z" level=info msg="StopPodSandbox for \"b21fa514b2a2e042658f887a5d7d282c7a1ac15f6face81d510765827a1e1297\""
Feb 13 15:44:13.250332 containerd[1896]: time="2025-02-13T15:44:13.250308767Z" level=info msg="TearDown network for sandbox \"b21fa514b2a2e042658f887a5d7d282c7a1ac15f6face81d510765827a1e1297\" successfully"
Feb 13 15:44:13.250645 containerd[1896]: time="2025-02-13T15:44:13.250565609Z" level=info msg="StopPodSandbox for \"b21fa514b2a2e042658f887a5d7d282c7a1ac15f6face81d510765827a1e1297\" returns successfully"
Feb 13 15:44:13.250984 containerd[1896]: time="2025-02-13T15:44:13.250169360Z" level=info msg="TearDown network for sandbox \"d374608d28f97501de3648300d424ed8c7440f3cb2896d7898f11c9805c0b5db\" successfully"
Feb 13 15:44:13.251496 containerd[1896]: time="2025-02-13T15:44:13.251476440Z" level=info msg="StopPodSandbox for \"d374608d28f97501de3648300d424ed8c7440f3cb2896d7898f11c9805c0b5db\" returns successfully"
Feb 13 15:44:13.252815 containerd[1896]: time="2025-02-13T15:44:13.251422191Z" level=info msg="StopPodSandbox for \"35005c0d515a0eaddf1296b48287bf2fc564795686b923d26c7167822c461e34\""
Feb 13 15:44:13.261576 containerd[1896]: time="2025-02-13T15:44:13.260410189Z" level=info msg="TearDown network for sandbox \"35005c0d515a0eaddf1296b48287bf2fc564795686b923d26c7167822c461e34\" successfully"
Feb 13 15:44:13.263167 containerd[1896]: time="2025-02-13T15:44:13.260314898Z" level=info msg="StopPodSandbox for \"656cb9cd83dfd3cffb8e8f43761bf28694ef337aa250f9c277f46916222a15cc\""
Feb 13 15:44:13.263167 containerd[1896]: time="2025-02-13T15:44:13.262998881Z" level=info msg="TearDown network for sandbox \"656cb9cd83dfd3cffb8e8f43761bf28694ef337aa250f9c277f46916222a15cc\" successfully"
Feb 13 15:44:13.263167 containerd[1896]: time="2025-02-13T15:44:13.263016493Z" level=info msg="StopPodSandbox for \"656cb9cd83dfd3cffb8e8f43761bf28694ef337aa250f9c277f46916222a15cc\" returns successfully"
Feb 13 15:44:13.263167 containerd[1896]: time="2025-02-13T15:44:13.263100894Z" level=info msg="StopPodSandbox for \"35005c0d515a0eaddf1296b48287bf2fc564795686b923d26c7167822c461e34\" returns successfully"
Feb 13 15:44:13.264516 containerd[1896]: time="2025-02-13T15:44:13.264426832Z" level=info msg="StopPodSandbox for \"55da48b26bc30cfa7260af1475a550e283fb097ac1f85f25fe65770b53672329\""
Feb 13 15:44:13.264621 containerd[1896]: time="2025-02-13T15:44:13.264557552Z" level=info msg="TearDown network for sandbox \"55da48b26bc30cfa7260af1475a550e283fb097ac1f85f25fe65770b53672329\" successfully"
Feb 13 15:44:13.264621 containerd[1896]: time="2025-02-13T15:44:13.264574109Z" level=info msg="StopPodSandbox for \"55da48b26bc30cfa7260af1475a550e283fb097ac1f85f25fe65770b53672329\" returns successfully"
Feb 13 15:44:13.266578 containerd[1896]: time="2025-02-13T15:44:13.266229669Z" level=info msg="StopPodSandbox for \"6eaf4246a3d5e7e3165b3333314936cdddcf2d034115976cf86b314f099da07b\""
Feb 13 15:44:13.266578 containerd[1896]: time="2025-02-13T15:44:13.266363599Z" level=info msg="TearDown network for sandbox \"6eaf4246a3d5e7e3165b3333314936cdddcf2d034115976cf86b314f099da07b\" successfully"
Feb 13 15:44:13.266578 containerd[1896]: time="2025-02-13T15:44:13.266382245Z" level=info msg="StopPodSandbox for \"6eaf4246a3d5e7e3165b3333314936cdddcf2d034115976cf86b314f099da07b\" returns successfully"
Feb 13 15:44:13.266578 containerd[1896]: time="2025-02-13T15:44:13.266245089Z" level=info msg="StopPodSandbox for \"db51172a800c1adb687d4121d2ec42d9aa54a58a68013c4c0280ad7511b6bafa\""
Feb 13 15:44:13.266578 containerd[1896]: time="2025-02-13T15:44:13.266519259Z" level=info msg="TearDown network for sandbox \"db51172a800c1adb687d4121d2ec42d9aa54a58a68013c4c0280ad7511b6bafa\" successfully"
Feb 13 15:44:13.266578 containerd[1896]: time="2025-02-13T15:44:13.266532059Z" level=info msg="StopPodSandbox for \"db51172a800c1adb687d4121d2ec42d9aa54a58a68013c4c0280ad7511b6bafa\" returns successfully"
Feb 13 15:44:13.268693 containerd[1896]: time="2025-02-13T15:44:13.268506380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-9dw5w,Uid:f00d591e-7798-4af4-9c36-f83590ed4ecd,Namespace:default,Attempt:5,}"
Feb 13 15:44:13.273854 containerd[1896]: time="2025-02-13T15:44:13.272991395Z" level=info msg="StopPodSandbox for \"98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556\""
Feb 13 15:44:13.273854 containerd[1896]: time="2025-02-13T15:44:13.273301867Z" level=info msg="TearDown network for sandbox \"98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556\" successfully"
Feb 13 15:44:13.273854 containerd[1896]: time="2025-02-13T15:44:13.273321989Z" level=info msg="StopPodSandbox for \"98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556\" returns successfully"
Feb 13 15:44:13.276103 containerd[1896]: time="2025-02-13T15:44:13.276068159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wpjns,Uid:0c126d30-b7f3-49c7-adb0-7d602f0e81f4,Namespace:calico-system,Attempt:9,}"
Feb 13 15:44:13.488669 containerd[1896]: time="2025-02-13T15:44:13.488625359Z" level=info msg="StopContainer for \"3a2295398bb35800bb2fc5b5ed1d8f2fa5055c14e7dfbc8ed214cbb9bede0453\" with timeout 5 (s)"
Feb 13 15:44:13.489866 containerd[1896]: time="2025-02-13T15:44:13.489252970Z" level=info msg="Stop container \"3a2295398bb35800bb2fc5b5ed1d8f2fa5055c14e7dfbc8ed214cbb9bede0453\" with signal terminated"
Feb 13 15:44:13.701617 kubelet[2396]: E0213 15:44:13.701570    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:13.857114 (udev-worker)[3502]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:44:13.859155 systemd-networkd[1817]: cali374e489825b: Link UP
Feb 13 15:44:13.859416 systemd-networkd[1817]: cali374e489825b: Gained carrier
Feb 13 15:44:13.894869 containerd[1896]: 2025-02-13 15:44:13.420 [INFO][3534] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Feb 13 15:44:13.894869 containerd[1896]: 2025-02-13 15:44:13.600 [INFO][3534] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.30.13-k8s-calico--kube--controllers--789ffc5c9d--vhwwt-eth0 calico-kube-controllers-789ffc5c9d- calico-system  8e0f5901-a2ec-434f-bd84-4e9bf59ff236 1261 0 2025-02-13 15:44:08 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:789ffc5c9d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s  172.31.30.13  calico-kube-controllers-789ffc5c9d-vhwwt eth0 calico-kube-controllers [] []   [kns.calico-system ksa.calico-system.calico-kube-controllers] cali374e489825b  [] []}} ContainerID="73c4c15668d908e3c59773e751dc3bdfe2ddd968159a1e7b4d606f82f9a6294c" Namespace="calico-system" Pod="calico-kube-controllers-789ffc5c9d-vhwwt" WorkloadEndpoint="172.31.30.13-k8s-calico--kube--controllers--789ffc5c9d--vhwwt-"
Feb 13 15:44:13.894869 containerd[1896]: 2025-02-13 15:44:13.600 [INFO][3534] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="73c4c15668d908e3c59773e751dc3bdfe2ddd968159a1e7b4d606f82f9a6294c" Namespace="calico-system" Pod="calico-kube-controllers-789ffc5c9d-vhwwt" WorkloadEndpoint="172.31.30.13-k8s-calico--kube--controllers--789ffc5c9d--vhwwt-eth0"
Feb 13 15:44:13.894869 containerd[1896]: 2025-02-13 15:44:13.735 [INFO][3603] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="73c4c15668d908e3c59773e751dc3bdfe2ddd968159a1e7b4d606f82f9a6294c" HandleID="k8s-pod-network.73c4c15668d908e3c59773e751dc3bdfe2ddd968159a1e7b4d606f82f9a6294c" Workload="172.31.30.13-k8s-calico--kube--controllers--789ffc5c9d--vhwwt-eth0"
Feb 13 15:44:13.894869 containerd[1896]: 2025-02-13 15:44:13.759 [INFO][3603] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="73c4c15668d908e3c59773e751dc3bdfe2ddd968159a1e7b4d606f82f9a6294c" HandleID="k8s-pod-network.73c4c15668d908e3c59773e751dc3bdfe2ddd968159a1e7b4d606f82f9a6294c" Workload="172.31.30.13-k8s-calico--kube--controllers--789ffc5c9d--vhwwt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003129d0), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.30.13", "pod":"calico-kube-controllers-789ffc5c9d-vhwwt", "timestamp":"2025-02-13 15:44:13.735307869 +0000 UTC"}, Hostname:"172.31.30.13", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 13 15:44:13.894869 containerd[1896]: 2025-02-13 15:44:13.759 [INFO][3603] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 15:44:13.894869 containerd[1896]: 2025-02-13 15:44:13.759 [INFO][3603] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 15:44:13.894869 containerd[1896]: 2025-02-13 15:44:13.760 [INFO][3603] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.30.13'
Feb 13 15:44:13.894869 containerd[1896]: 2025-02-13 15:44:13.763 [INFO][3603] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.73c4c15668d908e3c59773e751dc3bdfe2ddd968159a1e7b4d606f82f9a6294c" host="172.31.30.13"
Feb 13 15:44:13.894869 containerd[1896]: 2025-02-13 15:44:13.774 [INFO][3603] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.30.13"
Feb 13 15:44:13.894869 containerd[1896]: 2025-02-13 15:44:13.789 [INFO][3603] ipam/ipam.go 489: Trying affinity for 192.168.19.192/26 host="172.31.30.13"
Feb 13 15:44:13.894869 containerd[1896]: 2025-02-13 15:44:13.792 [INFO][3603] ipam/ipam.go 155: Attempting to load block cidr=192.168.19.192/26 host="172.31.30.13"
Feb 13 15:44:13.894869 containerd[1896]: 2025-02-13 15:44:13.804 [INFO][3603] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.192/26 host="172.31.30.13"
Feb 13 15:44:13.894869 containerd[1896]: 2025-02-13 15:44:13.804 [INFO][3603] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.192/26 handle="k8s-pod-network.73c4c15668d908e3c59773e751dc3bdfe2ddd968159a1e7b4d606f82f9a6294c" host="172.31.30.13"
Feb 13 15:44:13.894869 containerd[1896]: 2025-02-13 15:44:13.808 [INFO][3603] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.73c4c15668d908e3c59773e751dc3bdfe2ddd968159a1e7b4d606f82f9a6294c
Feb 13 15:44:13.894869 containerd[1896]: 2025-02-13 15:44:13.817 [INFO][3603] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.19.192/26 handle="k8s-pod-network.73c4c15668d908e3c59773e751dc3bdfe2ddd968159a1e7b4d606f82f9a6294c" host="172.31.30.13"
Feb 13 15:44:13.894869 containerd[1896]: 2025-02-13 15:44:13.839 [INFO][3603] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.19.193/26] block=192.168.19.192/26 handle="k8s-pod-network.73c4c15668d908e3c59773e751dc3bdfe2ddd968159a1e7b4d606f82f9a6294c" host="172.31.30.13"
Feb 13 15:44:13.894869 containerd[1896]: 2025-02-13 15:44:13.839 [INFO][3603] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.193/26] handle="k8s-pod-network.73c4c15668d908e3c59773e751dc3bdfe2ddd968159a1e7b4d606f82f9a6294c" host="172.31.30.13"
Feb 13 15:44:13.894869 containerd[1896]: 2025-02-13 15:44:13.839 [INFO][3603] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 15:44:13.894869 containerd[1896]: 2025-02-13 15:44:13.839 [INFO][3603] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.19.193/26] IPv6=[] ContainerID="73c4c15668d908e3c59773e751dc3bdfe2ddd968159a1e7b4d606f82f9a6294c" HandleID="k8s-pod-network.73c4c15668d908e3c59773e751dc3bdfe2ddd968159a1e7b4d606f82f9a6294c" Workload="172.31.30.13-k8s-calico--kube--controllers--789ffc5c9d--vhwwt-eth0"
Feb 13 15:44:13.896353 containerd[1896]: 2025-02-13 15:44:13.842 [INFO][3534] cni-plugin/k8s.go 386: Populated endpoint ContainerID="73c4c15668d908e3c59773e751dc3bdfe2ddd968159a1e7b4d606f82f9a6294c" Namespace="calico-system" Pod="calico-kube-controllers-789ffc5c9d-vhwwt" WorkloadEndpoint="172.31.30.13-k8s-calico--kube--controllers--789ffc5c9d--vhwwt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.13-k8s-calico--kube--controllers--789ffc5c9d--vhwwt-eth0", GenerateName:"calico-kube-controllers-789ffc5c9d-", Namespace:"calico-system", SelfLink:"", UID:"8e0f5901-a2ec-434f-bd84-4e9bf59ff236", ResourceVersion:"1261", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 44, 8, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"789ffc5c9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.13", ContainerID:"", Pod:"calico-kube-controllers-789ffc5c9d-vhwwt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali374e489825b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 15:44:13.896353 containerd[1896]: 2025-02-13 15:44:13.843 [INFO][3534] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.19.193/32] ContainerID="73c4c15668d908e3c59773e751dc3bdfe2ddd968159a1e7b4d606f82f9a6294c" Namespace="calico-system" Pod="calico-kube-controllers-789ffc5c9d-vhwwt" WorkloadEndpoint="172.31.30.13-k8s-calico--kube--controllers--789ffc5c9d--vhwwt-eth0"
Feb 13 15:44:13.896353 containerd[1896]: 2025-02-13 15:44:13.843 [INFO][3534] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali374e489825b ContainerID="73c4c15668d908e3c59773e751dc3bdfe2ddd968159a1e7b4d606f82f9a6294c" Namespace="calico-system" Pod="calico-kube-controllers-789ffc5c9d-vhwwt" WorkloadEndpoint="172.31.30.13-k8s-calico--kube--controllers--789ffc5c9d--vhwwt-eth0"
Feb 13 15:44:13.896353 containerd[1896]: 2025-02-13 15:44:13.861 [INFO][3534] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="73c4c15668d908e3c59773e751dc3bdfe2ddd968159a1e7b4d606f82f9a6294c" Namespace="calico-system" Pod="calico-kube-controllers-789ffc5c9d-vhwwt" WorkloadEndpoint="172.31.30.13-k8s-calico--kube--controllers--789ffc5c9d--vhwwt-eth0"
Feb 13 15:44:13.896353 containerd[1896]: 2025-02-13 15:44:13.862 [INFO][3534] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="73c4c15668d908e3c59773e751dc3bdfe2ddd968159a1e7b4d606f82f9a6294c" Namespace="calico-system" Pod="calico-kube-controllers-789ffc5c9d-vhwwt" WorkloadEndpoint="172.31.30.13-k8s-calico--kube--controllers--789ffc5c9d--vhwwt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.13-k8s-calico--kube--controllers--789ffc5c9d--vhwwt-eth0", GenerateName:"calico-kube-controllers-789ffc5c9d-", Namespace:"calico-system", SelfLink:"", UID:"8e0f5901-a2ec-434f-bd84-4e9bf59ff236", ResourceVersion:"1261", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 44, 8, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"789ffc5c9d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.13", ContainerID:"73c4c15668d908e3c59773e751dc3bdfe2ddd968159a1e7b4d606f82f9a6294c", Pod:"calico-kube-controllers-789ffc5c9d-vhwwt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali374e489825b", MAC:"8e:aa:6b:10:93:75", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 15:44:13.896353 containerd[1896]: 2025-02-13 15:44:13.892 [INFO][3534] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="73c4c15668d908e3c59773e751dc3bdfe2ddd968159a1e7b4d606f82f9a6294c" Namespace="calico-system" Pod="calico-kube-controllers-789ffc5c9d-vhwwt" WorkloadEndpoint="172.31.30.13-k8s-calico--kube--controllers--789ffc5c9d--vhwwt-eth0"
Feb 13 15:44:13.925160 containerd[1896]: time="2025-02-13T15:44:13.925046551Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:44:13.925160 containerd[1896]: time="2025-02-13T15:44:13.925128945Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:44:13.925675 containerd[1896]: time="2025-02-13T15:44:13.925463693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:44:13.925675 containerd[1896]: time="2025-02-13T15:44:13.925594980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:44:13.952438 systemd[1]: Started cri-containerd-73c4c15668d908e3c59773e751dc3bdfe2ddd968159a1e7b4d606f82f9a6294c.scope - libcontainer container 73c4c15668d908e3c59773e751dc3bdfe2ddd968159a1e7b4d606f82f9a6294c.
Feb 13 15:44:13.955655 systemd-networkd[1817]: calif2a1d778a7b: Link UP
Feb 13 15:44:13.955989 systemd-networkd[1817]: calif2a1d778a7b: Gained carrier
Feb 13 15:44:13.997738 containerd[1896]: 2025-02-13 15:44:13.529 [INFO][3558] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Feb 13 15:44:13.997738 containerd[1896]: 2025-02-13 15:44:13.602 [INFO][3558] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.30.13-k8s-csi--node--driver--wpjns-eth0 csi-node-driver- calico-system  0c126d30-b7f3-49c7-adb0-7d602f0e81f4 1072 0 2025-02-13 15:43:48 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s  172.31.30.13  csi-node-driver-wpjns eth0 csi-node-driver [] []   [kns.calico-system ksa.calico-system.csi-node-driver] calif2a1d778a7b  [] []}} ContainerID="e396b8473978663f73fce2e9bf3f11b359558dda1a52cd2b5faa48a0ee7df151" Namespace="calico-system" Pod="csi-node-driver-wpjns" WorkloadEndpoint="172.31.30.13-k8s-csi--node--driver--wpjns-"
Feb 13 15:44:13.997738 containerd[1896]: 2025-02-13 15:44:13.602 [INFO][3558] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e396b8473978663f73fce2e9bf3f11b359558dda1a52cd2b5faa48a0ee7df151" Namespace="calico-system" Pod="csi-node-driver-wpjns" WorkloadEndpoint="172.31.30.13-k8s-csi--node--driver--wpjns-eth0"
Feb 13 15:44:13.997738 containerd[1896]: 2025-02-13 15:44:13.740 [INFO][3612] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e396b8473978663f73fce2e9bf3f11b359558dda1a52cd2b5faa48a0ee7df151" HandleID="k8s-pod-network.e396b8473978663f73fce2e9bf3f11b359558dda1a52cd2b5faa48a0ee7df151" Workload="172.31.30.13-k8s-csi--node--driver--wpjns-eth0"
Feb 13 15:44:13.997738 containerd[1896]: 2025-02-13 15:44:13.760 [INFO][3612] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e396b8473978663f73fce2e9bf3f11b359558dda1a52cd2b5faa48a0ee7df151" HandleID="k8s-pod-network.e396b8473978663f73fce2e9bf3f11b359558dda1a52cd2b5faa48a0ee7df151" Workload="172.31.30.13-k8s-csi--node--driver--wpjns-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00026d9b0), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.30.13", "pod":"csi-node-driver-wpjns", "timestamp":"2025-02-13 15:44:13.740659827 +0000 UTC"}, Hostname:"172.31.30.13", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 13 15:44:13.997738 containerd[1896]: 2025-02-13 15:44:13.760 [INFO][3612] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 15:44:13.997738 containerd[1896]: 2025-02-13 15:44:13.840 [INFO][3612] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 15:44:13.997738 containerd[1896]: 2025-02-13 15:44:13.840 [INFO][3612] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.30.13'
Feb 13 15:44:13.997738 containerd[1896]: 2025-02-13 15:44:13.868 [INFO][3612] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e396b8473978663f73fce2e9bf3f11b359558dda1a52cd2b5faa48a0ee7df151" host="172.31.30.13"
Feb 13 15:44:13.997738 containerd[1896]: 2025-02-13 15:44:13.884 [INFO][3612] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.30.13"
Feb 13 15:44:13.997738 containerd[1896]: 2025-02-13 15:44:13.898 [INFO][3612] ipam/ipam.go 489: Trying affinity for 192.168.19.192/26 host="172.31.30.13"
Feb 13 15:44:13.997738 containerd[1896]: 2025-02-13 15:44:13.905 [INFO][3612] ipam/ipam.go 155: Attempting to load block cidr=192.168.19.192/26 host="172.31.30.13"
Feb 13 15:44:13.997738 containerd[1896]: 2025-02-13 15:44:13.909 [INFO][3612] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.192/26 host="172.31.30.13"
Feb 13 15:44:13.997738 containerd[1896]: 2025-02-13 15:44:13.910 [INFO][3612] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.192/26 handle="k8s-pod-network.e396b8473978663f73fce2e9bf3f11b359558dda1a52cd2b5faa48a0ee7df151" host="172.31.30.13"
Feb 13 15:44:13.997738 containerd[1896]: 2025-02-13 15:44:13.915 [INFO][3612] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e396b8473978663f73fce2e9bf3f11b359558dda1a52cd2b5faa48a0ee7df151
Feb 13 15:44:13.997738 containerd[1896]: 2025-02-13 15:44:13.925 [INFO][3612] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.19.192/26 handle="k8s-pod-network.e396b8473978663f73fce2e9bf3f11b359558dda1a52cd2b5faa48a0ee7df151" host="172.31.30.13"
Feb 13 15:44:13.997738 containerd[1896]: 2025-02-13 15:44:13.943 [INFO][3612] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.19.194/26] block=192.168.19.192/26 handle="k8s-pod-network.e396b8473978663f73fce2e9bf3f11b359558dda1a52cd2b5faa48a0ee7df151" host="172.31.30.13"
Feb 13 15:44:13.997738 containerd[1896]: 2025-02-13 15:44:13.944 [INFO][3612] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.194/26] handle="k8s-pod-network.e396b8473978663f73fce2e9bf3f11b359558dda1a52cd2b5faa48a0ee7df151" host="172.31.30.13"
Feb 13 15:44:13.997738 containerd[1896]: 2025-02-13 15:44:13.944 [INFO][3612] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 15:44:13.997738 containerd[1896]: 2025-02-13 15:44:13.944 [INFO][3612] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.19.194/26] IPv6=[] ContainerID="e396b8473978663f73fce2e9bf3f11b359558dda1a52cd2b5faa48a0ee7df151" HandleID="k8s-pod-network.e396b8473978663f73fce2e9bf3f11b359558dda1a52cd2b5faa48a0ee7df151" Workload="172.31.30.13-k8s-csi--node--driver--wpjns-eth0"
Feb 13 15:44:14.005558 containerd[1896]: 2025-02-13 15:44:13.946 [INFO][3558] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e396b8473978663f73fce2e9bf3f11b359558dda1a52cd2b5faa48a0ee7df151" Namespace="calico-system" Pod="csi-node-driver-wpjns" WorkloadEndpoint="172.31.30.13-k8s-csi--node--driver--wpjns-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.13-k8s-csi--node--driver--wpjns-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0c126d30-b7f3-49c7-adb0-7d602f0e81f4", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 43, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.13", ContainerID:"", Pod:"csi-node-driver-wpjns", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.19.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif2a1d778a7b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 15:44:14.005558 containerd[1896]: 2025-02-13 15:44:13.947 [INFO][3558] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.19.194/32] ContainerID="e396b8473978663f73fce2e9bf3f11b359558dda1a52cd2b5faa48a0ee7df151" Namespace="calico-system" Pod="csi-node-driver-wpjns" WorkloadEndpoint="172.31.30.13-k8s-csi--node--driver--wpjns-eth0"
Feb 13 15:44:14.005558 containerd[1896]: 2025-02-13 15:44:13.947 [INFO][3558] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif2a1d778a7b ContainerID="e396b8473978663f73fce2e9bf3f11b359558dda1a52cd2b5faa48a0ee7df151" Namespace="calico-system" Pod="csi-node-driver-wpjns" WorkloadEndpoint="172.31.30.13-k8s-csi--node--driver--wpjns-eth0"
Feb 13 15:44:14.005558 containerd[1896]: 2025-02-13 15:44:13.957 [INFO][3558] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e396b8473978663f73fce2e9bf3f11b359558dda1a52cd2b5faa48a0ee7df151" Namespace="calico-system" Pod="csi-node-driver-wpjns" WorkloadEndpoint="172.31.30.13-k8s-csi--node--driver--wpjns-eth0"
Feb 13 15:44:14.005558 containerd[1896]: 2025-02-13 15:44:13.959 [INFO][3558] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e396b8473978663f73fce2e9bf3f11b359558dda1a52cd2b5faa48a0ee7df151" Namespace="calico-system" Pod="csi-node-driver-wpjns" WorkloadEndpoint="172.31.30.13-k8s-csi--node--driver--wpjns-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.13-k8s-csi--node--driver--wpjns-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0c126d30-b7f3-49c7-adb0-7d602f0e81f4", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 43, 48, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.13", ContainerID:"e396b8473978663f73fce2e9bf3f11b359558dda1a52cd2b5faa48a0ee7df151", Pod:"csi-node-driver-wpjns", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.19.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif2a1d778a7b", MAC:"c2:fe:2c:4a:53:83", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 15:44:14.005558 containerd[1896]: 2025-02-13 15:44:13.993 [INFO][3558] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e396b8473978663f73fce2e9bf3f11b359558dda1a52cd2b5faa48a0ee7df151" Namespace="calico-system" Pod="csi-node-driver-wpjns" WorkloadEndpoint="172.31.30.13-k8s-csi--node--driver--wpjns-eth0"
Feb 13 15:44:14.048245 systemd[1]: run-netns-cni\x2d24604f99\x2d201f\x2dcb5c\x2d9138\x2d8b947b010353.mount: Deactivated successfully.
Feb 13 15:44:14.131187 systemd-networkd[1817]: calic39d6d2a09c: Link UP
Feb 13 15:44:14.131676 systemd-networkd[1817]: calic39d6d2a09c: Gained carrier
Feb 13 15:44:14.134663 containerd[1896]: time="2025-02-13T15:44:14.134621019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-789ffc5c9d-vhwwt,Uid:8e0f5901-a2ec-434f-bd84-4e9bf59ff236,Namespace:calico-system,Attempt:4,} returns sandbox id \"73c4c15668d908e3c59773e751dc3bdfe2ddd968159a1e7b4d606f82f9a6294c\""
Feb 13 15:44:14.158350 containerd[1896]: 2025-02-13 15:44:13.521 [INFO][3569] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Feb 13 15:44:14.158350 containerd[1896]: 2025-02-13 15:44:13.600 [INFO][3569] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.30.13-k8s-nginx--deployment--8587fbcb89--9dw5w-eth0 nginx-deployment-8587fbcb89- default  f00d591e-7798-4af4-9c36-f83590ed4ecd 1238 0 2025-02-13 15:44:08 +0000 UTC <nil> <nil> map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s  172.31.30.13  nginx-deployment-8587fbcb89-9dw5w eth0 default [] []   [kns.default ksa.default.default] calic39d6d2a09c  [] []}} ContainerID="3e78c39a31f6bbf49359894815f80ce17403d88b08bfdcf8b6b791018c5b727d" Namespace="default" Pod="nginx-deployment-8587fbcb89-9dw5w" WorkloadEndpoint="172.31.30.13-k8s-nginx--deployment--8587fbcb89--9dw5w-"
Feb 13 15:44:14.158350 containerd[1896]: 2025-02-13 15:44:13.600 [INFO][3569] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3e78c39a31f6bbf49359894815f80ce17403d88b08bfdcf8b6b791018c5b727d" Namespace="default" Pod="nginx-deployment-8587fbcb89-9dw5w" WorkloadEndpoint="172.31.30.13-k8s-nginx--deployment--8587fbcb89--9dw5w-eth0"
Feb 13 15:44:14.158350 containerd[1896]: 2025-02-13 15:44:13.744 [INFO][3602] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3e78c39a31f6bbf49359894815f80ce17403d88b08bfdcf8b6b791018c5b727d" HandleID="k8s-pod-network.3e78c39a31f6bbf49359894815f80ce17403d88b08bfdcf8b6b791018c5b727d" Workload="172.31.30.13-k8s-nginx--deployment--8587fbcb89--9dw5w-eth0"
Feb 13 15:44:14.158350 containerd[1896]: 2025-02-13 15:44:13.761 [INFO][3602] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3e78c39a31f6bbf49359894815f80ce17403d88b08bfdcf8b6b791018c5b727d" HandleID="k8s-pod-network.3e78c39a31f6bbf49359894815f80ce17403d88b08bfdcf8b6b791018c5b727d" Workload="172.31.30.13-k8s-nginx--deployment--8587fbcb89--9dw5w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003179a0), Attrs:map[string]string{"namespace":"default", "node":"172.31.30.13", "pod":"nginx-deployment-8587fbcb89-9dw5w", "timestamp":"2025-02-13 15:44:13.743284347 +0000 UTC"}, Hostname:"172.31.30.13", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 13 15:44:14.158350 containerd[1896]: 2025-02-13 15:44:13.761 [INFO][3602] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 15:44:14.158350 containerd[1896]: 2025-02-13 15:44:13.944 [INFO][3602] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 15:44:14.158350 containerd[1896]: 2025-02-13 15:44:13.944 [INFO][3602] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.30.13'
Feb 13 15:44:14.158350 containerd[1896]: 2025-02-13 15:44:13.971 [INFO][3602] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3e78c39a31f6bbf49359894815f80ce17403d88b08bfdcf8b6b791018c5b727d" host="172.31.30.13"
Feb 13 15:44:14.158350 containerd[1896]: 2025-02-13 15:44:13.992 [INFO][3602] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.30.13"
Feb 13 15:44:14.158350 containerd[1896]: 2025-02-13 15:44:14.040 [INFO][3602] ipam/ipam.go 489: Trying affinity for 192.168.19.192/26 host="172.31.30.13"
Feb 13 15:44:14.158350 containerd[1896]: 2025-02-13 15:44:14.050 [INFO][3602] ipam/ipam.go 155: Attempting to load block cidr=192.168.19.192/26 host="172.31.30.13"
Feb 13 15:44:14.158350 containerd[1896]: 2025-02-13 15:44:14.055 [INFO][3602] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.192/26 host="172.31.30.13"
Feb 13 15:44:14.158350 containerd[1896]: 2025-02-13 15:44:14.055 [INFO][3602] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.192/26 handle="k8s-pod-network.3e78c39a31f6bbf49359894815f80ce17403d88b08bfdcf8b6b791018c5b727d" host="172.31.30.13"
Feb 13 15:44:14.158350 containerd[1896]: 2025-02-13 15:44:14.059 [INFO][3602] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3e78c39a31f6bbf49359894815f80ce17403d88b08bfdcf8b6b791018c5b727d
Feb 13 15:44:14.158350 containerd[1896]: 2025-02-13 15:44:14.081 [INFO][3602] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.19.192/26 handle="k8s-pod-network.3e78c39a31f6bbf49359894815f80ce17403d88b08bfdcf8b6b791018c5b727d" host="172.31.30.13"
Feb 13 15:44:14.158350 containerd[1896]: 2025-02-13 15:44:14.099 [INFO][3602] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.19.195/26] block=192.168.19.192/26 handle="k8s-pod-network.3e78c39a31f6bbf49359894815f80ce17403d88b08bfdcf8b6b791018c5b727d" host="172.31.30.13"
Feb 13 15:44:14.158350 containerd[1896]: 2025-02-13 15:44:14.099 [INFO][3602] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.195/26] handle="k8s-pod-network.3e78c39a31f6bbf49359894815f80ce17403d88b08bfdcf8b6b791018c5b727d" host="172.31.30.13"
Feb 13 15:44:14.158350 containerd[1896]: 2025-02-13 15:44:14.099 [INFO][3602] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 15:44:14.158350 containerd[1896]: 2025-02-13 15:44:14.099 [INFO][3602] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.19.195/26] IPv6=[] ContainerID="3e78c39a31f6bbf49359894815f80ce17403d88b08bfdcf8b6b791018c5b727d" HandleID="k8s-pod-network.3e78c39a31f6bbf49359894815f80ce17403d88b08bfdcf8b6b791018c5b727d" Workload="172.31.30.13-k8s-nginx--deployment--8587fbcb89--9dw5w-eth0"
Feb 13 15:44:14.160334 containerd[1896]: 2025-02-13 15:44:14.114 [INFO][3569] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3e78c39a31f6bbf49359894815f80ce17403d88b08bfdcf8b6b791018c5b727d" Namespace="default" Pod="nginx-deployment-8587fbcb89-9dw5w" WorkloadEndpoint="172.31.30.13-k8s-nginx--deployment--8587fbcb89--9dw5w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.13-k8s-nginx--deployment--8587fbcb89--9dw5w-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"f00d591e-7798-4af4-9c36-f83590ed4ecd", ResourceVersion:"1238", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 44, 8, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.13", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-9dw5w", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.19.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calic39d6d2a09c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 15:44:14.160334 containerd[1896]: 2025-02-13 15:44:14.114 [INFO][3569] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.19.195/32] ContainerID="3e78c39a31f6bbf49359894815f80ce17403d88b08bfdcf8b6b791018c5b727d" Namespace="default" Pod="nginx-deployment-8587fbcb89-9dw5w" WorkloadEndpoint="172.31.30.13-k8s-nginx--deployment--8587fbcb89--9dw5w-eth0"
Feb 13 15:44:14.160334 containerd[1896]: 2025-02-13 15:44:14.114 [INFO][3569] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic39d6d2a09c ContainerID="3e78c39a31f6bbf49359894815f80ce17403d88b08bfdcf8b6b791018c5b727d" Namespace="default" Pod="nginx-deployment-8587fbcb89-9dw5w" WorkloadEndpoint="172.31.30.13-k8s-nginx--deployment--8587fbcb89--9dw5w-eth0"
Feb 13 15:44:14.160334 containerd[1896]: 2025-02-13 15:44:14.136 [INFO][3569] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3e78c39a31f6bbf49359894815f80ce17403d88b08bfdcf8b6b791018c5b727d" Namespace="default" Pod="nginx-deployment-8587fbcb89-9dw5w" WorkloadEndpoint="172.31.30.13-k8s-nginx--deployment--8587fbcb89--9dw5w-eth0"
Feb 13 15:44:14.160334 containerd[1896]: 2025-02-13 15:44:14.137 [INFO][3569] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3e78c39a31f6bbf49359894815f80ce17403d88b08bfdcf8b6b791018c5b727d" Namespace="default" Pod="nginx-deployment-8587fbcb89-9dw5w" WorkloadEndpoint="172.31.30.13-k8s-nginx--deployment--8587fbcb89--9dw5w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.13-k8s-nginx--deployment--8587fbcb89--9dw5w-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"f00d591e-7798-4af4-9c36-f83590ed4ecd", ResourceVersion:"1238", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 44, 8, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.13", ContainerID:"3e78c39a31f6bbf49359894815f80ce17403d88b08bfdcf8b6b791018c5b727d", Pod:"nginx-deployment-8587fbcb89-9dw5w", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.19.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calic39d6d2a09c", MAC:"8e:03:de:d7:ea:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 15:44:14.160334 containerd[1896]: 2025-02-13 15:44:14.152 [INFO][3569] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3e78c39a31f6bbf49359894815f80ce17403d88b08bfdcf8b6b791018c5b727d" Namespace="default" Pod="nginx-deployment-8587fbcb89-9dw5w" WorkloadEndpoint="172.31.30.13-k8s-nginx--deployment--8587fbcb89--9dw5w-eth0"
Feb 13 15:44:14.164953 containerd[1896]: time="2025-02-13T15:44:14.163539445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:44:14.164953 containerd[1896]: time="2025-02-13T15:44:14.163614435Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:44:14.164953 containerd[1896]: time="2025-02-13T15:44:14.163641026Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:44:14.164953 containerd[1896]: time="2025-02-13T15:44:14.163755458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:44:14.223167 systemd[1]: Started cri-containerd-e396b8473978663f73fce2e9bf3f11b359558dda1a52cd2b5faa48a0ee7df151.scope - libcontainer container e396b8473978663f73fce2e9bf3f11b359558dda1a52cd2b5faa48a0ee7df151.
Feb 13 15:44:14.238526 containerd[1896]: time="2025-02-13T15:44:14.238176723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:44:14.238526 containerd[1896]: time="2025-02-13T15:44:14.238250760Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:44:14.238526 containerd[1896]: time="2025-02-13T15:44:14.238275756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:44:14.238526 containerd[1896]: time="2025-02-13T15:44:14.238380149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:44:14.292905 systemd[1]: Started cri-containerd-3e78c39a31f6bbf49359894815f80ce17403d88b08bfdcf8b6b791018c5b727d.scope - libcontainer container 3e78c39a31f6bbf49359894815f80ce17403d88b08bfdcf8b6b791018c5b727d.
Feb 13 15:44:14.301769 containerd[1896]: time="2025-02-13T15:44:14.301729695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wpjns,Uid:0c126d30-b7f3-49c7-adb0-7d602f0e81f4,Namespace:calico-system,Attempt:9,} returns sandbox id \"e396b8473978663f73fce2e9bf3f11b359558dda1a52cd2b5faa48a0ee7df151\""
Feb 13 15:44:14.377822 containerd[1896]: time="2025-02-13T15:44:14.377747387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-9dw5w,Uid:f00d591e-7798-4af4-9c36-f83590ed4ecd,Namespace:default,Attempt:5,} returns sandbox id \"3e78c39a31f6bbf49359894815f80ce17403d88b08bfdcf8b6b791018c5b727d\""
Feb 13 15:44:14.702157 kubelet[2396]: E0213 15:44:14.702116    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:15.158705 update_engine[1888]: I20250213 15:44:15.158555  1888 update_attempter.cc:509] Updating boot flags...
Feb 13 15:44:15.274972 systemd-networkd[1817]: cali374e489825b: Gained IPv6LL
Feb 13 15:44:15.420758 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 40 scanned by (udev-worker) (3890)
Feb 13 15:44:15.529487 systemd-networkd[1817]: calif2a1d778a7b: Gained IPv6LL
Feb 13 15:44:15.717859 kubelet[2396]: E0213 15:44:15.702648    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:15.848511 systemd-networkd[1817]: calic39d6d2a09c: Gained IPv6LL
Feb 13 15:44:16.111979 containerd[1896]: time="2025-02-13T15:44:16.111570205Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:44:16.113612 containerd[1896]: time="2025-02-13T15:44:16.113191575Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141"
Feb 13 15:44:16.115692 containerd[1896]: time="2025-02-13T15:44:16.114482435Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:44:16.116829 containerd[1896]: time="2025-02-13T15:44:16.116655626Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:44:16.117983 containerd[1896]: time="2025-02-13T15:44:16.117768598Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.45992336s"
Feb 13 15:44:16.117983 containerd[1896]: time="2025-02-13T15:44:16.117829322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Feb 13 15:44:16.120284 containerd[1896]: time="2025-02-13T15:44:16.120255313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\""
Feb 13 15:44:16.138233 containerd[1896]: time="2025-02-13T15:44:16.138187706Z" level=info msg="CreateContainer within sandbox \"e7d6233b23ad1ab464ea2a1b2746688bea7f86d423a4d4d4adf641c417d0895a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Feb 13 15:44:16.161524 containerd[1896]: time="2025-02-13T15:44:16.161481428Z" level=info msg="CreateContainer within sandbox \"e7d6233b23ad1ab464ea2a1b2746688bea7f86d423a4d4d4adf641c417d0895a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"81d0c11eac8402fd2d705d1f3c6fd857aeee7071e4fe440e0c8cf397615ad860\""
Feb 13 15:44:16.162465 containerd[1896]: time="2025-02-13T15:44:16.162426788Z" level=info msg="StartContainer for \"81d0c11eac8402fd2d705d1f3c6fd857aeee7071e4fe440e0c8cf397615ad860\""
Feb 13 15:44:16.216418 systemd[1]: Started cri-containerd-81d0c11eac8402fd2d705d1f3c6fd857aeee7071e4fe440e0c8cf397615ad860.scope - libcontainer container 81d0c11eac8402fd2d705d1f3c6fd857aeee7071e4fe440e0c8cf397615ad860.
Feb 13 15:44:16.347516 containerd[1896]: time="2025-02-13T15:44:16.347468864Z" level=info msg="StartContainer for \"81d0c11eac8402fd2d705d1f3c6fd857aeee7071e4fe440e0c8cf397615ad860\" returns successfully"
Feb 13 15:44:16.703299 kubelet[2396]: E0213 15:44:16.702872    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:17.433009 kubelet[2396]: I0213 15:44:17.432970    2396 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 15:44:17.705819 kubelet[2396]: E0213 15:44:17.704103    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:18.363010 ntpd[1878]: Listen normally on 7 cali374e489825b [fe80::ecee:eeff:feee:eeee%3]:123
Feb 13 15:44:18.364040 ntpd[1878]: Listen normally on 8 calif2a1d778a7b [fe80::ecee:eeff:feee:eeee%4]:123
Feb 13 15:44:18.364083 ntpd[1878]: Listen normally on 9 calic39d6d2a09c [fe80::ecee:eeff:feee:eeee%5]:123
Feb 13 15:44:18.511307 containerd[1896]: time="2025-02-13T15:44:18.511244932Z" level=info msg="Kill container \"3a2295398bb35800bb2fc5b5ed1d8f2fa5055c14e7dfbc8ed214cbb9bede0453\""
Feb 13 15:44:18.537046 systemd[1]: cri-containerd-3a2295398bb35800bb2fc5b5ed1d8f2fa5055c14e7dfbc8ed214cbb9bede0453.scope: Deactivated successfully.
Feb 13 15:44:18.537387 systemd[1]: cri-containerd-3a2295398bb35800bb2fc5b5ed1d8f2fa5055c14e7dfbc8ed214cbb9bede0453.scope: Consumed 1.317s CPU time, 130.1M memory peak, 4K read from disk, 552K written to disk.
Feb 13 15:44:18.590662 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3a2295398bb35800bb2fc5b5ed1d8f2fa5055c14e7dfbc8ed214cbb9bede0453-rootfs.mount: Deactivated successfully.
Feb 13 15:44:18.706738 kubelet[2396]: E0213 15:44:18.706691    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:18.792111 containerd[1896]: time="2025-02-13T15:44:18.791530402Z" level=info msg="shim disconnected" id=3a2295398bb35800bb2fc5b5ed1d8f2fa5055c14e7dfbc8ed214cbb9bede0453 namespace=k8s.io
Feb 13 15:44:18.792111 containerd[1896]: time="2025-02-13T15:44:18.791598343Z" level=warning msg="cleaning up after shim disconnected" id=3a2295398bb35800bb2fc5b5ed1d8f2fa5055c14e7dfbc8ed214cbb9bede0453 namespace=k8s.io
Feb 13 15:44:18.792111 containerd[1896]: time="2025-02-13T15:44:18.791610556Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:44:18.826278 containerd[1896]: time="2025-02-13T15:44:18.826234011Z" level=info msg="StopContainer for \"3a2295398bb35800bb2fc5b5ed1d8f2fa5055c14e7dfbc8ed214cbb9bede0453\" returns successfully"
Feb 13 15:44:18.828538 containerd[1896]: time="2025-02-13T15:44:18.828502449Z" level=info msg="StopPodSandbox for \"e8a1ba7cc6a1527825515b4c94ba70d92febff98b58a879c8ae974daab3874fe\""
Feb 13 15:44:18.828697 containerd[1896]: time="2025-02-13T15:44:18.828554643Z" level=info msg="Container to stop \"8a7fcb45266decfa4b570b9e98c2efdb961d162d9ac2d140d56058d229f353de\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:44:18.828757 containerd[1896]: time="2025-02-13T15:44:18.828702806Z" level=info msg="Container to stop \"d6976f2e0819a61fd248f9b1623666f0795d6a8b7784daf00f51cef091581808\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:44:18.828927 containerd[1896]: time="2025-02-13T15:44:18.828815247Z" level=info msg="Container to stop \"3a2295398bb35800bb2fc5b5ed1d8f2fa5055c14e7dfbc8ed214cbb9bede0453\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 13 15:44:18.835535 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e8a1ba7cc6a1527825515b4c94ba70d92febff98b58a879c8ae974daab3874fe-shm.mount: Deactivated successfully.
Feb 13 15:44:18.848276 systemd[1]: cri-containerd-e8a1ba7cc6a1527825515b4c94ba70d92febff98b58a879c8ae974daab3874fe.scope: Deactivated successfully.
Feb 13 15:44:18.902339 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8a1ba7cc6a1527825515b4c94ba70d92febff98b58a879c8ae974daab3874fe-rootfs.mount: Deactivated successfully.
Feb 13 15:44:18.923443 containerd[1896]: time="2025-02-13T15:44:18.923175746Z" level=info msg="shim disconnected" id=e8a1ba7cc6a1527825515b4c94ba70d92febff98b58a879c8ae974daab3874fe namespace=k8s.io
Feb 13 15:44:18.923443 containerd[1896]: time="2025-02-13T15:44:18.923235890Z" level=warning msg="cleaning up after shim disconnected" id=e8a1ba7cc6a1527825515b4c94ba70d92febff98b58a879c8ae974daab3874fe namespace=k8s.io
Feb 13 15:44:18.923443 containerd[1896]: time="2025-02-13T15:44:18.923248890Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:44:18.948898 containerd[1896]: time="2025-02-13T15:44:18.948626713Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192"
Feb 13 15:44:18.951004 containerd[1896]: time="2025-02-13T15:44:18.949378482Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:44:18.951004 containerd[1896]: time="2025-02-13T15:44:18.950835510Z" level=info msg="TearDown network for sandbox \"e8a1ba7cc6a1527825515b4c94ba70d92febff98b58a879c8ae974daab3874fe\" successfully"
Feb 13 15:44:18.951004 containerd[1896]: time="2025-02-13T15:44:18.950861368Z" level=info msg="StopPodSandbox for \"e8a1ba7cc6a1527825515b4c94ba70d92febff98b58a879c8ae974daab3874fe\" returns successfully"
Feb 13 15:44:18.951698 containerd[1896]: time="2025-02-13T15:44:18.951669347Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:44:18.955681 containerd[1896]: time="2025-02-13T15:44:18.955554227Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:44:18.956676 containerd[1896]: time="2025-02-13T15:44:18.956639979Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.836007485s"
Feb 13 15:44:18.957484 containerd[1896]: time="2025-02-13T15:44:18.956684390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\""
Feb 13 15:44:18.960303 containerd[1896]: time="2025-02-13T15:44:18.959788062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\""
Feb 13 15:44:18.970786 containerd[1896]: time="2025-02-13T15:44:18.970598493Z" level=info msg="CreateContainer within sandbox \"73c4c15668d908e3c59773e751dc3bdfe2ddd968159a1e7b4d606f82f9a6294c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Feb 13 15:44:18.994634 containerd[1896]: time="2025-02-13T15:44:18.994585418Z" level=info msg="CreateContainer within sandbox \"73c4c15668d908e3c59773e751dc3bdfe2ddd968159a1e7b4d606f82f9a6294c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1804e5dc75263adb732df6d4a75846874642bde926d77917ff6d9325ea4dee90\""
Feb 13 15:44:18.995265 containerd[1896]: time="2025-02-13T15:44:18.995233608Z" level=info msg="StartContainer for \"1804e5dc75263adb732df6d4a75846874642bde926d77917ff6d9325ea4dee90\""
Feb 13 15:44:19.028316 systemd[1]: Started cri-containerd-1804e5dc75263adb732df6d4a75846874642bde926d77917ff6d9325ea4dee90.scope - libcontainer container 1804e5dc75263adb732df6d4a75846874642bde926d77917ff6d9325ea4dee90.
Feb 13 15:44:19.028944 kubelet[2396]: I0213 15:44:19.028663    2396 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7584554fbb-ln4wz" podStartSLOduration=4.131736711 podStartE2EDuration="14.028644407s" podCreationTimestamp="2025-02-13 15:44:05 +0000 UTC" firstStartedPulling="2025-02-13 15:44:06.222461408 +0000 UTC m=+18.304343689" lastFinishedPulling="2025-02-13 15:44:16.119369101 +0000 UTC m=+28.201251385" observedRunningTime="2025-02-13 15:44:16.502307435 +0000 UTC m=+28.584189728" watchObservedRunningTime="2025-02-13 15:44:19.028644407 +0000 UTC m=+31.110526714"
Feb 13 15:44:19.076814 kubelet[2396]: I0213 15:44:19.074856    2396 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/19a347ef-053e-4125-9ac3-cf8bf1913eee-flexvol-driver-host\") pod \"19a347ef-053e-4125-9ac3-cf8bf1913eee\" (UID: \"19a347ef-053e-4125-9ac3-cf8bf1913eee\") "
Feb 13 15:44:19.076814 kubelet[2396]: I0213 15:44:19.074941    2396 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/19a347ef-053e-4125-9ac3-cf8bf1913eee-var-run-calico\") pod \"19a347ef-053e-4125-9ac3-cf8bf1913eee\" (UID: \"19a347ef-053e-4125-9ac3-cf8bf1913eee\") "
Feb 13 15:44:19.076814 kubelet[2396]: I0213 15:44:19.074969    2396 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19a347ef-053e-4125-9ac3-cf8bf1913eee-xtables-lock\") pod \"19a347ef-053e-4125-9ac3-cf8bf1913eee\" (UID: \"19a347ef-053e-4125-9ac3-cf8bf1913eee\") "
Feb 13 15:44:19.076814 kubelet[2396]: I0213 15:44:19.074992    2396 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19a347ef-053e-4125-9ac3-cf8bf1913eee-lib-modules\") pod \"19a347ef-053e-4125-9ac3-cf8bf1913eee\" (UID: \"19a347ef-053e-4125-9ac3-cf8bf1913eee\") "
Feb 13 15:44:19.076814 kubelet[2396]: I0213 15:44:19.075059    2396 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b44n4\" (UniqueName: \"kubernetes.io/projected/19a347ef-053e-4125-9ac3-cf8bf1913eee-kube-api-access-b44n4\") pod \"19a347ef-053e-4125-9ac3-cf8bf1913eee\" (UID: \"19a347ef-053e-4125-9ac3-cf8bf1913eee\") "
Feb 13 15:44:19.076814 kubelet[2396]: I0213 15:44:19.075082    2396 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/19a347ef-053e-4125-9ac3-cf8bf1913eee-cni-bin-dir\") pod \"19a347ef-053e-4125-9ac3-cf8bf1913eee\" (UID: \"19a347ef-053e-4125-9ac3-cf8bf1913eee\") "
Feb 13 15:44:19.077336 kubelet[2396]: I0213 15:44:19.075139    2396 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/19a347ef-053e-4125-9ac3-cf8bf1913eee-cni-log-dir\") pod \"19a347ef-053e-4125-9ac3-cf8bf1913eee\" (UID: \"19a347ef-053e-4125-9ac3-cf8bf1913eee\") "
Feb 13 15:44:19.077336 kubelet[2396]: I0213 15:44:19.075170    2396 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19a347ef-053e-4125-9ac3-cf8bf1913eee-tigera-ca-bundle\") pod \"19a347ef-053e-4125-9ac3-cf8bf1913eee\" (UID: \"19a347ef-053e-4125-9ac3-cf8bf1913eee\") "
Feb 13 15:44:19.077336 kubelet[2396]: I0213 15:44:19.075261    2396 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/19a347ef-053e-4125-9ac3-cf8bf1913eee-node-certs\") pod \"19a347ef-053e-4125-9ac3-cf8bf1913eee\" (UID: \"19a347ef-053e-4125-9ac3-cf8bf1913eee\") "
Feb 13 15:44:19.077336 kubelet[2396]: I0213 15:44:19.075315    2396 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/19a347ef-053e-4125-9ac3-cf8bf1913eee-cni-net-dir\") pod \"19a347ef-053e-4125-9ac3-cf8bf1913eee\" (UID: \"19a347ef-053e-4125-9ac3-cf8bf1913eee\") "
Feb 13 15:44:19.077336 kubelet[2396]: I0213 15:44:19.075340    2396 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/19a347ef-053e-4125-9ac3-cf8bf1913eee-var-lib-calico\") pod \"19a347ef-053e-4125-9ac3-cf8bf1913eee\" (UID: \"19a347ef-053e-4125-9ac3-cf8bf1913eee\") "
Feb 13 15:44:19.077336 kubelet[2396]: I0213 15:44:19.075394    2396 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/19a347ef-053e-4125-9ac3-cf8bf1913eee-policysync\") pod \"19a347ef-053e-4125-9ac3-cf8bf1913eee\" (UID: \"19a347ef-053e-4125-9ac3-cf8bf1913eee\") "
Feb 13 15:44:19.077592 kubelet[2396]: I0213 15:44:19.075672    2396 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19a347ef-053e-4125-9ac3-cf8bf1913eee-policysync" (OuterVolumeSpecName: "policysync") pod "19a347ef-053e-4125-9ac3-cf8bf1913eee" (UID: "19a347ef-053e-4125-9ac3-cf8bf1913eee"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:44:19.077592 kubelet[2396]: I0213 15:44:19.075733    2396 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19a347ef-053e-4125-9ac3-cf8bf1913eee-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "19a347ef-053e-4125-9ac3-cf8bf1913eee" (UID: "19a347ef-053e-4125-9ac3-cf8bf1913eee"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:44:19.077592 kubelet[2396]: I0213 15:44:19.075758    2396 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19a347ef-053e-4125-9ac3-cf8bf1913eee-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "19a347ef-053e-4125-9ac3-cf8bf1913eee" (UID: "19a347ef-053e-4125-9ac3-cf8bf1913eee"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:44:19.077592 kubelet[2396]: I0213 15:44:19.075779    2396 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19a347ef-053e-4125-9ac3-cf8bf1913eee-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "19a347ef-053e-4125-9ac3-cf8bf1913eee" (UID: "19a347ef-053e-4125-9ac3-cf8bf1913eee"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:44:19.077592 kubelet[2396]: I0213 15:44:19.075816    2396 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19a347ef-053e-4125-9ac3-cf8bf1913eee-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "19a347ef-053e-4125-9ac3-cf8bf1913eee" (UID: "19a347ef-053e-4125-9ac3-cf8bf1913eee"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:44:19.079668 kubelet[2396]: I0213 15:44:19.079500    2396 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19a347ef-053e-4125-9ac3-cf8bf1913eee-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "19a347ef-053e-4125-9ac3-cf8bf1913eee" (UID: "19a347ef-053e-4125-9ac3-cf8bf1913eee"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:44:19.080270 kubelet[2396]: I0213 15:44:19.079778    2396 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19a347ef-053e-4125-9ac3-cf8bf1913eee-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "19a347ef-053e-4125-9ac3-cf8bf1913eee" (UID: "19a347ef-053e-4125-9ac3-cf8bf1913eee"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:44:19.080270 kubelet[2396]: I0213 15:44:19.079859    2396 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19a347ef-053e-4125-9ac3-cf8bf1913eee-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "19a347ef-053e-4125-9ac3-cf8bf1913eee" (UID: "19a347ef-053e-4125-9ac3-cf8bf1913eee"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:44:19.080270 kubelet[2396]: I0213 15:44:19.079895    2396 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19a347ef-053e-4125-9ac3-cf8bf1913eee-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "19a347ef-053e-4125-9ac3-cf8bf1913eee" (UID: "19a347ef-053e-4125-9ac3-cf8bf1913eee"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 13 15:44:19.088198 containerd[1896]: time="2025-02-13T15:44:19.087667734Z" level=info msg="StartContainer for \"1804e5dc75263adb732df6d4a75846874642bde926d77917ff6d9325ea4dee90\" returns successfully"
Feb 13 15:44:19.091319 kubelet[2396]: I0213 15:44:19.091275    2396 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19a347ef-053e-4125-9ac3-cf8bf1913eee-kube-api-access-b44n4" (OuterVolumeSpecName: "kube-api-access-b44n4") pod "19a347ef-053e-4125-9ac3-cf8bf1913eee" (UID: "19a347ef-053e-4125-9ac3-cf8bf1913eee"). InnerVolumeSpecName "kube-api-access-b44n4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 13 15:44:19.091707 kubelet[2396]: E0213 15:44:19.091679    2396 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="19a347ef-053e-4125-9ac3-cf8bf1913eee" containerName="flexvol-driver"
Feb 13 15:44:19.091707 kubelet[2396]: E0213 15:44:19.091702    2396 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="19a347ef-053e-4125-9ac3-cf8bf1913eee" containerName="install-cni"
Feb 13 15:44:19.091935 kubelet[2396]: E0213 15:44:19.091915    2396 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="19a347ef-053e-4125-9ac3-cf8bf1913eee" containerName="calico-node"
Feb 13 15:44:19.092180 kubelet[2396]: I0213 15:44:19.092154    2396 memory_manager.go:354] "RemoveStaleState removing state" podUID="19a347ef-053e-4125-9ac3-cf8bf1913eee" containerName="calico-node"
Feb 13 15:44:19.093361 kubelet[2396]: I0213 15:44:19.093318    2396 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19a347ef-053e-4125-9ac3-cf8bf1913eee-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "19a347ef-053e-4125-9ac3-cf8bf1913eee" (UID: "19a347ef-053e-4125-9ac3-cf8bf1913eee"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 13 15:44:19.098264 kubelet[2396]: I0213 15:44:19.098063    2396 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19a347ef-053e-4125-9ac3-cf8bf1913eee-node-certs" (OuterVolumeSpecName: "node-certs") pod "19a347ef-053e-4125-9ac3-cf8bf1913eee" (UID: "19a347ef-053e-4125-9ac3-cf8bf1913eee"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 13 15:44:19.106660 systemd[1]: Created slice kubepods-besteffort-pode4e516d7_96d2_4f05_a025_b486acd937f3.slice - libcontainer container kubepods-besteffort-pode4e516d7_96d2_4f05_a025_b486acd937f3.slice.
Feb 13 15:44:19.176673 kubelet[2396]: I0213 15:44:19.176625    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e4e516d7-96d2-4f05-a025-b486acd937f3-node-certs\") pod \"calico-node-4vc9j\" (UID: \"e4e516d7-96d2-4f05-a025-b486acd937f3\") " pod="calico-system/calico-node-4vc9j"
Feb 13 15:44:19.176841 kubelet[2396]: I0213 15:44:19.176713    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e4e516d7-96d2-4f05-a025-b486acd937f3-flexvol-driver-host\") pod \"calico-node-4vc9j\" (UID: \"e4e516d7-96d2-4f05-a025-b486acd937f3\") " pod="calico-system/calico-node-4vc9j"
Feb 13 15:44:19.176841 kubelet[2396]: I0213 15:44:19.176745    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e4e516d7-96d2-4f05-a025-b486acd937f3-lib-modules\") pod \"calico-node-4vc9j\" (UID: \"e4e516d7-96d2-4f05-a025-b486acd937f3\") " pod="calico-system/calico-node-4vc9j"
Feb 13 15:44:19.176963 kubelet[2396]: I0213 15:44:19.176861    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e4e516d7-96d2-4f05-a025-b486acd937f3-var-run-calico\") pod \"calico-node-4vc9j\" (UID: \"e4e516d7-96d2-4f05-a025-b486acd937f3\") " pod="calico-system/calico-node-4vc9j"
Feb 13 15:44:19.176963 kubelet[2396]: I0213 15:44:19.176885    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4e516d7-96d2-4f05-a025-b486acd937f3-tigera-ca-bundle\") pod \"calico-node-4vc9j\" (UID: \"e4e516d7-96d2-4f05-a025-b486acd937f3\") " pod="calico-system/calico-node-4vc9j"
Feb 13 15:44:19.176963 kubelet[2396]: I0213 15:44:19.176944    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e4e516d7-96d2-4f05-a025-b486acd937f3-xtables-lock\") pod \"calico-node-4vc9j\" (UID: \"e4e516d7-96d2-4f05-a025-b486acd937f3\") " pod="calico-system/calico-node-4vc9j"
Feb 13 15:44:19.177105 kubelet[2396]: I0213 15:44:19.177018    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e4e516d7-96d2-4f05-a025-b486acd937f3-var-lib-calico\") pod \"calico-node-4vc9j\" (UID: \"e4e516d7-96d2-4f05-a025-b486acd937f3\") " pod="calico-system/calico-node-4vc9j"
Feb 13 15:44:19.177105 kubelet[2396]: I0213 15:44:19.177046    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e4e516d7-96d2-4f05-a025-b486acd937f3-cni-net-dir\") pod \"calico-node-4vc9j\" (UID: \"e4e516d7-96d2-4f05-a025-b486acd937f3\") " pod="calico-system/calico-node-4vc9j"
Feb 13 15:44:19.177105 kubelet[2396]: I0213 15:44:19.177095    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e4e516d7-96d2-4f05-a025-b486acd937f3-cni-log-dir\") pod \"calico-node-4vc9j\" (UID: \"e4e516d7-96d2-4f05-a025-b486acd937f3\") " pod="calico-system/calico-node-4vc9j"
Feb 13 15:44:19.177224 kubelet[2396]: I0213 15:44:19.177153    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p87v9\" (UniqueName: \"kubernetes.io/projected/e4e516d7-96d2-4f05-a025-b486acd937f3-kube-api-access-p87v9\") pod \"calico-node-4vc9j\" (UID: \"e4e516d7-96d2-4f05-a025-b486acd937f3\") " pod="calico-system/calico-node-4vc9j"
Feb 13 15:44:19.177367 kubelet[2396]: I0213 15:44:19.177340    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e4e516d7-96d2-4f05-a025-b486acd937f3-policysync\") pod \"calico-node-4vc9j\" (UID: \"e4e516d7-96d2-4f05-a025-b486acd937f3\") " pod="calico-system/calico-node-4vc9j"
Feb 13 15:44:19.177436 kubelet[2396]: I0213 15:44:19.177408    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e4e516d7-96d2-4f05-a025-b486acd937f3-cni-bin-dir\") pod \"calico-node-4vc9j\" (UID: \"e4e516d7-96d2-4f05-a025-b486acd937f3\") " pod="calico-system/calico-node-4vc9j"
Feb 13 15:44:19.177495 kubelet[2396]: I0213 15:44:19.177478    2396 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-b44n4\" (UniqueName: \"kubernetes.io/projected/19a347ef-053e-4125-9ac3-cf8bf1913eee-kube-api-access-b44n4\") on node \"172.31.30.13\" DevicePath \"\""
Feb 13 15:44:19.177545 kubelet[2396]: I0213 15:44:19.177500    2396 reconciler_common.go:288] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/19a347ef-053e-4125-9ac3-cf8bf1913eee-cni-bin-dir\") on node \"172.31.30.13\" DevicePath \"\""
Feb 13 15:44:19.177545 kubelet[2396]: I0213 15:44:19.177531    2396 reconciler_common.go:288] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/19a347ef-053e-4125-9ac3-cf8bf1913eee-cni-log-dir\") on node \"172.31.30.13\" DevicePath \"\""
Feb 13 15:44:19.177682 kubelet[2396]: I0213 15:44:19.177545    2396 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/19a347ef-053e-4125-9ac3-cf8bf1913eee-tigera-ca-bundle\") on node \"172.31.30.13\" DevicePath \"\""
Feb 13 15:44:19.177682 kubelet[2396]: I0213 15:44:19.177558    2396 reconciler_common.go:288] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/19a347ef-053e-4125-9ac3-cf8bf1913eee-var-lib-calico\") on node \"172.31.30.13\" DevicePath \"\""
Feb 13 15:44:19.177682 kubelet[2396]: I0213 15:44:19.177570    2396 reconciler_common.go:288] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/19a347ef-053e-4125-9ac3-cf8bf1913eee-node-certs\") on node \"172.31.30.13\" DevicePath \"\""
Feb 13 15:44:19.177682 kubelet[2396]: I0213 15:44:19.177581    2396 reconciler_common.go:288] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/19a347ef-053e-4125-9ac3-cf8bf1913eee-cni-net-dir\") on node \"172.31.30.13\" DevicePath \"\""
Feb 13 15:44:19.177682 kubelet[2396]: I0213 15:44:19.177593    2396 reconciler_common.go:288] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/19a347ef-053e-4125-9ac3-cf8bf1913eee-policysync\") on node \"172.31.30.13\" DevicePath \"\""
Feb 13 15:44:19.177682 kubelet[2396]: I0213 15:44:19.177607    2396 reconciler_common.go:288] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/19a347ef-053e-4125-9ac3-cf8bf1913eee-var-run-calico\") on node \"172.31.30.13\" DevicePath \"\""
Feb 13 15:44:19.177682 kubelet[2396]: I0213 15:44:19.177619    2396 reconciler_common.go:288] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/19a347ef-053e-4125-9ac3-cf8bf1913eee-flexvol-driver-host\") on node \"172.31.30.13\" DevicePath \"\""
Feb 13 15:44:19.177682 kubelet[2396]: I0213 15:44:19.177632    2396 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/19a347ef-053e-4125-9ac3-cf8bf1913eee-lib-modules\") on node \"172.31.30.13\" DevicePath \"\""
Feb 13 15:44:19.177921 kubelet[2396]: I0213 15:44:19.177643    2396 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/19a347ef-053e-4125-9ac3-cf8bf1913eee-xtables-lock\") on node \"172.31.30.13\" DevicePath \"\""
Feb 13 15:44:19.413944 containerd[1896]: time="2025-02-13T15:44:19.413293028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4vc9j,Uid:e4e516d7-96d2-4f05-a025-b486acd937f3,Namespace:calico-system,Attempt:0,}"
Feb 13 15:44:19.441595 containerd[1896]: time="2025-02-13T15:44:19.441286835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:44:19.441595 containerd[1896]: time="2025-02-13T15:44:19.441353697Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:44:19.441595 containerd[1896]: time="2025-02-13T15:44:19.441370483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:44:19.441595 containerd[1896]: time="2025-02-13T15:44:19.441461037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:44:19.451776 kubelet[2396]: I0213 15:44:19.450387    2396 scope.go:117] "RemoveContainer" containerID="3a2295398bb35800bb2fc5b5ed1d8f2fa5055c14e7dfbc8ed214cbb9bede0453"
Feb 13 15:44:19.452974 containerd[1896]: time="2025-02-13T15:44:19.452932366Z" level=info msg="RemoveContainer for \"3a2295398bb35800bb2fc5b5ed1d8f2fa5055c14e7dfbc8ed214cbb9bede0453\""
Feb 13 15:44:19.464328 systemd[1]: Removed slice kubepods-besteffort-pod19a347ef_053e_4125_9ac3_cf8bf1913eee.slice - libcontainer container kubepods-besteffort-pod19a347ef_053e_4125_9ac3_cf8bf1913eee.slice.
Feb 13 15:44:19.464536 systemd[1]: kubepods-besteffort-pod19a347ef_053e_4125_9ac3_cf8bf1913eee.slice: Consumed 1.912s CPU time, 231.5M memory peak, 4K read from disk, 158M written to disk.
Feb 13 15:44:19.466540 containerd[1896]: time="2025-02-13T15:44:19.466497366Z" level=info msg="RemoveContainer for \"3a2295398bb35800bb2fc5b5ed1d8f2fa5055c14e7dfbc8ed214cbb9bede0453\" returns successfully"
Feb 13 15:44:19.469610 kubelet[2396]: I0213 15:44:19.469434    2396 scope.go:117] "RemoveContainer" containerID="d6976f2e0819a61fd248f9b1623666f0795d6a8b7784daf00f51cef091581808"
Feb 13 15:44:19.472564 containerd[1896]: time="2025-02-13T15:44:19.472400341Z" level=info msg="RemoveContainer for \"d6976f2e0819a61fd248f9b1623666f0795d6a8b7784daf00f51cef091581808\""
Feb 13 15:44:19.481360 containerd[1896]: time="2025-02-13T15:44:19.481309247Z" level=info msg="RemoveContainer for \"d6976f2e0819a61fd248f9b1623666f0795d6a8b7784daf00f51cef091581808\" returns successfully"
Feb 13 15:44:19.481642 kubelet[2396]: I0213 15:44:19.481621    2396 scope.go:117] "RemoveContainer" containerID="8a7fcb45266decfa4b570b9e98c2efdb961d162d9ac2d140d56058d229f353de"
Feb 13 15:44:19.483769 kubelet[2396]: I0213 15:44:19.483710    2396 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-789ffc5c9d-vhwwt" podStartSLOduration=6.661704766 podStartE2EDuration="11.483691225s" podCreationTimestamp="2025-02-13 15:44:08 +0000 UTC" firstStartedPulling="2025-02-13 15:44:14.136775102 +0000 UTC m=+26.218657391" lastFinishedPulling="2025-02-13 15:44:18.958761569 +0000 UTC m=+31.040643850" observedRunningTime="2025-02-13 15:44:19.482050029 +0000 UTC m=+31.563932339" watchObservedRunningTime="2025-02-13 15:44:19.483691225 +0000 UTC m=+31.565573500"
Feb 13 15:44:19.485037 systemd[1]: Started cri-containerd-6b5d389bed11e082c8d157ce10fd177a735eff319adb8f37136881bea86a1bad.scope - libcontainer container 6b5d389bed11e082c8d157ce10fd177a735eff319adb8f37136881bea86a1bad.
Feb 13 15:44:19.490677 containerd[1896]: time="2025-02-13T15:44:19.490165981Z" level=info msg="RemoveContainer for \"8a7fcb45266decfa4b570b9e98c2efdb961d162d9ac2d140d56058d229f353de\""
Feb 13 15:44:19.494490 containerd[1896]: time="2025-02-13T15:44:19.494175386Z" level=info msg="RemoveContainer for \"8a7fcb45266decfa4b570b9e98c2efdb961d162d9ac2d140d56058d229f353de\" returns successfully"
Feb 13 15:44:19.495358 kubelet[2396]: I0213 15:44:19.495084    2396 scope.go:117] "RemoveContainer" containerID="3a2295398bb35800bb2fc5b5ed1d8f2fa5055c14e7dfbc8ed214cbb9bede0453"
Feb 13 15:44:19.496198 containerd[1896]: time="2025-02-13T15:44:19.496111856Z" level=error msg="ContainerStatus for \"3a2295398bb35800bb2fc5b5ed1d8f2fa5055c14e7dfbc8ed214cbb9bede0453\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3a2295398bb35800bb2fc5b5ed1d8f2fa5055c14e7dfbc8ed214cbb9bede0453\": not found"
Feb 13 15:44:19.496868 kubelet[2396]: E0213 15:44:19.496538    2396 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3a2295398bb35800bb2fc5b5ed1d8f2fa5055c14e7dfbc8ed214cbb9bede0453\": not found" containerID="3a2295398bb35800bb2fc5b5ed1d8f2fa5055c14e7dfbc8ed214cbb9bede0453"
Feb 13 15:44:19.496868 kubelet[2396]: I0213 15:44:19.496577    2396 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3a2295398bb35800bb2fc5b5ed1d8f2fa5055c14e7dfbc8ed214cbb9bede0453"} err="failed to get container status \"3a2295398bb35800bb2fc5b5ed1d8f2fa5055c14e7dfbc8ed214cbb9bede0453\": rpc error: code = NotFound desc = an error occurred when try to find container \"3a2295398bb35800bb2fc5b5ed1d8f2fa5055c14e7dfbc8ed214cbb9bede0453\": not found"
Feb 13 15:44:19.496868 kubelet[2396]: I0213 15:44:19.496635    2396 scope.go:117] "RemoveContainer" containerID="d6976f2e0819a61fd248f9b1623666f0795d6a8b7784daf00f51cef091581808"
Feb 13 15:44:19.497641 containerd[1896]: time="2025-02-13T15:44:19.497260107Z" level=error msg="ContainerStatus for \"d6976f2e0819a61fd248f9b1623666f0795d6a8b7784daf00f51cef091581808\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d6976f2e0819a61fd248f9b1623666f0795d6a8b7784daf00f51cef091581808\": not found"
Feb 13 15:44:19.497896 kubelet[2396]: E0213 15:44:19.497739    2396 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d6976f2e0819a61fd248f9b1623666f0795d6a8b7784daf00f51cef091581808\": not found" containerID="d6976f2e0819a61fd248f9b1623666f0795d6a8b7784daf00f51cef091581808"
Feb 13 15:44:19.498072 kubelet[2396]: I0213 15:44:19.497977    2396 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d6976f2e0819a61fd248f9b1623666f0795d6a8b7784daf00f51cef091581808"} err="failed to get container status \"d6976f2e0819a61fd248f9b1623666f0795d6a8b7784daf00f51cef091581808\": rpc error: code = NotFound desc = an error occurred when try to find container \"d6976f2e0819a61fd248f9b1623666f0795d6a8b7784daf00f51cef091581808\": not found"
Feb 13 15:44:19.498072 kubelet[2396]: I0213 15:44:19.498006    2396 scope.go:117] "RemoveContainer" containerID="8a7fcb45266decfa4b570b9e98c2efdb961d162d9ac2d140d56058d229f353de"
Feb 13 15:44:19.498525 containerd[1896]: time="2025-02-13T15:44:19.498483457Z" level=error msg="ContainerStatus for \"8a7fcb45266decfa4b570b9e98c2efdb961d162d9ac2d140d56058d229f353de\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8a7fcb45266decfa4b570b9e98c2efdb961d162d9ac2d140d56058d229f353de\": not found"
Feb 13 15:44:19.499173 kubelet[2396]: E0213 15:44:19.498706    2396 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8a7fcb45266decfa4b570b9e98c2efdb961d162d9ac2d140d56058d229f353de\": not found" containerID="8a7fcb45266decfa4b570b9e98c2efdb961d162d9ac2d140d56058d229f353de"
Feb 13 15:44:19.499173 kubelet[2396]: I0213 15:44:19.498734    2396 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8a7fcb45266decfa4b570b9e98c2efdb961d162d9ac2d140d56058d229f353de"} err="failed to get container status \"8a7fcb45266decfa4b570b9e98c2efdb961d162d9ac2d140d56058d229f353de\": rpc error: code = NotFound desc = an error occurred when try to find container \"8a7fcb45266decfa4b570b9e98c2efdb961d162d9ac2d140d56058d229f353de\": not found"
Feb 13 15:44:19.522706 containerd[1896]: time="2025-02-13T15:44:19.522660050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4vc9j,Uid:e4e516d7-96d2-4f05-a025-b486acd937f3,Namespace:calico-system,Attempt:0,} returns sandbox id \"6b5d389bed11e082c8d157ce10fd177a735eff319adb8f37136881bea86a1bad\""
Feb 13 15:44:19.525767 containerd[1896]: time="2025-02-13T15:44:19.525731780Z" level=info msg="CreateContainer within sandbox \"6b5d389bed11e082c8d157ce10fd177a735eff319adb8f37136881bea86a1bad\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Feb 13 15:44:19.540868 containerd[1896]: time="2025-02-13T15:44:19.540821119Z" level=info msg="CreateContainer within sandbox \"6b5d389bed11e082c8d157ce10fd177a735eff319adb8f37136881bea86a1bad\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"093fc8cd7d2d201862274273b12ea80e5913124c387d42814cbee5ea40a0a79a\""
Feb 13 15:44:19.541385 containerd[1896]: time="2025-02-13T15:44:19.541253200Z" level=info msg="StartContainer for \"093fc8cd7d2d201862274273b12ea80e5913124c387d42814cbee5ea40a0a79a\""
Feb 13 15:44:19.587095 systemd[1]: Started cri-containerd-093fc8cd7d2d201862274273b12ea80e5913124c387d42814cbee5ea40a0a79a.scope - libcontainer container 093fc8cd7d2d201862274273b12ea80e5913124c387d42814cbee5ea40a0a79a.
Feb 13 15:44:19.618145 systemd[1]: var-lib-kubelet-pods-19a347ef\x2d053e\x2d4125\x2d9ac3\x2dcf8bf1913eee-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully.
Feb 13 15:44:19.618390 systemd[1]: var-lib-kubelet-pods-19a347ef\x2d053e\x2d4125\x2d9ac3\x2dcf8bf1913eee-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db44n4.mount: Deactivated successfully.
Feb 13 15:44:19.618488 systemd[1]: var-lib-kubelet-pods-19a347ef\x2d053e\x2d4125\x2d9ac3\x2dcf8bf1913eee-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully.
Feb 13 15:44:19.668921 containerd[1896]: time="2025-02-13T15:44:19.668755260Z" level=info msg="StartContainer for \"093fc8cd7d2d201862274273b12ea80e5913124c387d42814cbee5ea40a0a79a\" returns successfully"
Feb 13 15:44:19.707740 kubelet[2396]: E0213 15:44:19.707682    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:19.830214 systemd[1]: cri-containerd-093fc8cd7d2d201862274273b12ea80e5913124c387d42814cbee5ea40a0a79a.scope: Deactivated successfully.
Feb 13 15:44:19.830597 systemd[1]: cri-containerd-093fc8cd7d2d201862274273b12ea80e5913124c387d42814cbee5ea40a0a79a.scope: Consumed 36ms CPU time, 15.7M memory peak, 7.8M read from disk, 6.3M written to disk.
Feb 13 15:44:19.882022 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-093fc8cd7d2d201862274273b12ea80e5913124c387d42814cbee5ea40a0a79a-rootfs.mount: Deactivated successfully.
Feb 13 15:44:19.900254 containerd[1896]: time="2025-02-13T15:44:19.899937236Z" level=info msg="shim disconnected" id=093fc8cd7d2d201862274273b12ea80e5913124c387d42814cbee5ea40a0a79a namespace=k8s.io
Feb 13 15:44:19.900254 containerd[1896]: time="2025-02-13T15:44:19.900021053Z" level=warning msg="cleaning up after shim disconnected" id=093fc8cd7d2d201862274273b12ea80e5913124c387d42814cbee5ea40a0a79a namespace=k8s.io
Feb 13 15:44:19.900254 containerd[1896]: time="2025-02-13T15:44:19.900034715Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:44:20.457345 kubelet[2396]: I0213 15:44:20.457285    2396 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 15:44:20.466953 containerd[1896]: time="2025-02-13T15:44:20.466496315Z" level=info msg="CreateContainer within sandbox \"6b5d389bed11e082c8d157ce10fd177a735eff319adb8f37136881bea86a1bad\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Feb 13 15:44:20.526819 containerd[1896]: time="2025-02-13T15:44:20.524848439Z" level=info msg="CreateContainer within sandbox \"6b5d389bed11e082c8d157ce10fd177a735eff319adb8f37136881bea86a1bad\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"8e93e247c828ea5c804e8af3d446d3b1412eb4bf8dd95675684edff975da3699\""
Feb 13 15:44:20.536263 containerd[1896]: time="2025-02-13T15:44:20.536206103Z" level=info msg="StartContainer for \"8e93e247c828ea5c804e8af3d446d3b1412eb4bf8dd95675684edff975da3699\""
Feb 13 15:44:20.608480 systemd[1]: Started cri-containerd-8e93e247c828ea5c804e8af3d446d3b1412eb4bf8dd95675684edff975da3699.scope - libcontainer container 8e93e247c828ea5c804e8af3d446d3b1412eb4bf8dd95675684edff975da3699.
Feb 13 15:44:20.689498 containerd[1896]: time="2025-02-13T15:44:20.689331543Z" level=info msg="StartContainer for \"8e93e247c828ea5c804e8af3d446d3b1412eb4bf8dd95675684edff975da3699\" returns successfully"
Feb 13 15:44:20.708169 kubelet[2396]: E0213 15:44:20.708043    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:20.845306 kubelet[2396]: I0213 15:44:20.845230    2396 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19a347ef-053e-4125-9ac3-cf8bf1913eee" path="/var/lib/kubelet/pods/19a347ef-053e-4125-9ac3-cf8bf1913eee/volumes"
Feb 13 15:44:20.851218 containerd[1896]: time="2025-02-13T15:44:20.851168199Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:44:20.852222 containerd[1896]: time="2025-02-13T15:44:20.852173503Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632"
Feb 13 15:44:20.854903 containerd[1896]: time="2025-02-13T15:44:20.853465428Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:44:20.856831 containerd[1896]: time="2025-02-13T15:44:20.856041056Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:44:20.856831 containerd[1896]: time="2025-02-13T15:44:20.856674591Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.896361896s"
Feb 13 15:44:20.856831 containerd[1896]: time="2025-02-13T15:44:20.856706030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\""
Feb 13 15:44:20.858171 containerd[1896]: time="2025-02-13T15:44:20.858141911Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 13 15:44:20.859058 containerd[1896]: time="2025-02-13T15:44:20.859028962Z" level=info msg="CreateContainer within sandbox \"e396b8473978663f73fce2e9bf3f11b359558dda1a52cd2b5faa48a0ee7df151\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Feb 13 15:44:20.884394 containerd[1896]: time="2025-02-13T15:44:20.884350253Z" level=info msg="CreateContainer within sandbox \"e396b8473978663f73fce2e9bf3f11b359558dda1a52cd2b5faa48a0ee7df151\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1f031c2ece6063f04e9c1fc57db118e93e8fc100853357f86330b16a5b1e5108\""
Feb 13 15:44:20.885261 containerd[1896]: time="2025-02-13T15:44:20.885158566Z" level=info msg="StartContainer for \"1f031c2ece6063f04e9c1fc57db118e93e8fc100853357f86330b16a5b1e5108\""
Feb 13 15:44:20.959244 systemd[1]: Started cri-containerd-1f031c2ece6063f04e9c1fc57db118e93e8fc100853357f86330b16a5b1e5108.scope - libcontainer container 1f031c2ece6063f04e9c1fc57db118e93e8fc100853357f86330b16a5b1e5108.
Feb 13 15:44:21.022139 containerd[1896]: time="2025-02-13T15:44:21.021789234Z" level=info msg="StartContainer for \"1f031c2ece6063f04e9c1fc57db118e93e8fc100853357f86330b16a5b1e5108\" returns successfully"
Feb 13 15:44:21.708745 kubelet[2396]: E0213 15:44:21.708690    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:22.054257 systemd[1]: cri-containerd-8e93e247c828ea5c804e8af3d446d3b1412eb4bf8dd95675684edff975da3699.scope: Deactivated successfully.
Feb 13 15:44:22.055906 systemd[1]: cri-containerd-8e93e247c828ea5c804e8af3d446d3b1412eb4bf8dd95675684edff975da3699.scope: Consumed 811ms CPU time, 88.6M memory peak, 71.4M read from disk.
Feb 13 15:44:22.100977 containerd[1896]: time="2025-02-13T15:44:22.100845850Z" level=info msg="shim disconnected" id=8e93e247c828ea5c804e8af3d446d3b1412eb4bf8dd95675684edff975da3699 namespace=k8s.io
Feb 13 15:44:22.100977 containerd[1896]: time="2025-02-13T15:44:22.100918838Z" level=warning msg="cleaning up after shim disconnected" id=8e93e247c828ea5c804e8af3d446d3b1412eb4bf8dd95675684edff975da3699 namespace=k8s.io
Feb 13 15:44:22.100977 containerd[1896]: time="2025-02-13T15:44:22.100932216Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 15:44:22.102650 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e93e247c828ea5c804e8af3d446d3b1412eb4bf8dd95675684edff975da3699-rootfs.mount: Deactivated successfully.
Feb 13 15:44:22.578603 containerd[1896]: time="2025-02-13T15:44:22.578556727Z" level=info msg="CreateContainer within sandbox \"6b5d389bed11e082c8d157ce10fd177a735eff319adb8f37136881bea86a1bad\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Feb 13 15:44:22.630266 containerd[1896]: time="2025-02-13T15:44:22.630216405Z" level=info msg="CreateContainer within sandbox \"6b5d389bed11e082c8d157ce10fd177a735eff319adb8f37136881bea86a1bad\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"70e0bc3a7f2b7cd3ab6c6a0e0eefa2c09d2672f6abf3598894c23b67ddf2e6a3\""
Feb 13 15:44:22.632974 containerd[1896]: time="2025-02-13T15:44:22.631324570Z" level=info msg="StartContainer for \"70e0bc3a7f2b7cd3ab6c6a0e0eefa2c09d2672f6abf3598894c23b67ddf2e6a3\""
Feb 13 15:44:22.709531 kubelet[2396]: E0213 15:44:22.709485    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:22.768347 systemd[1]: Started cri-containerd-70e0bc3a7f2b7cd3ab6c6a0e0eefa2c09d2672f6abf3598894c23b67ddf2e6a3.scope - libcontainer container 70e0bc3a7f2b7cd3ab6c6a0e0eefa2c09d2672f6abf3598894c23b67ddf2e6a3.
Feb 13 15:44:22.835896 containerd[1896]: time="2025-02-13T15:44:22.835767431Z" level=info msg="StartContainer for \"70e0bc3a7f2b7cd3ab6c6a0e0eefa2c09d2672f6abf3598894c23b67ddf2e6a3\" returns successfully"
Feb 13 15:44:23.104442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3677489881.mount: Deactivated successfully.
Feb 13 15:44:23.522159 kubelet[2396]: I0213 15:44:23.518726    2396 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4vc9j" podStartSLOduration=4.518701443 podStartE2EDuration="4.518701443s" podCreationTimestamp="2025-02-13 15:44:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-02-13 15:44:23.518422126 +0000 UTC m=+35.600304419" watchObservedRunningTime="2025-02-13 15:44:23.518701443 +0000 UTC m=+35.600583737"
Feb 13 15:44:23.709913 kubelet[2396]: E0213 15:44:23.709875    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:24.114264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3136733948.mount: Deactivated successfully.
Feb 13 15:44:24.692515 systemd[1]: run-containerd-runc-k8s.io-70e0bc3a7f2b7cd3ab6c6a0e0eefa2c09d2672f6abf3598894c23b67ddf2e6a3-runc.BimOjh.mount: Deactivated successfully.
Feb 13 15:44:24.711166 kubelet[2396]: E0213 15:44:24.711124    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:25.713776 kubelet[2396]: E0213 15:44:25.713706    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:26.714986 kubelet[2396]: E0213 15:44:26.714837    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:26.967863 containerd[1896]: time="2025-02-13T15:44:26.967634489Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:44:26.969706 containerd[1896]: time="2025-02-13T15:44:26.969653457Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73054493"
Feb 13 15:44:26.970844 containerd[1896]: time="2025-02-13T15:44:26.970494981Z" level=info msg="ImageCreate event name:\"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:44:26.975421 containerd[1896]: time="2025-02-13T15:44:26.975368702Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:44:26.977328 containerd[1896]: time="2025-02-13T15:44:26.977283406Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 6.119102147s"
Feb 13 15:44:26.977494 containerd[1896]: time="2025-02-13T15:44:26.977331454Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\""
Feb 13 15:44:26.980001 containerd[1896]: time="2025-02-13T15:44:26.979709357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Feb 13 15:44:27.016607 containerd[1896]: time="2025-02-13T15:44:27.016559309Z" level=info msg="CreateContainer within sandbox \"3e78c39a31f6bbf49359894815f80ce17403d88b08bfdcf8b6b791018c5b727d\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Feb 13 15:44:27.041177 containerd[1896]: time="2025-02-13T15:44:27.041043189Z" level=info msg="CreateContainer within sandbox \"3e78c39a31f6bbf49359894815f80ce17403d88b08bfdcf8b6b791018c5b727d\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"bc2b24c65a8cbae7e8472188d7942e097dd1ad895398864727cd6541c51f228a\""
Feb 13 15:44:27.041735 containerd[1896]: time="2025-02-13T15:44:27.041693909Z" level=info msg="StartContainer for \"bc2b24c65a8cbae7e8472188d7942e097dd1ad895398864727cd6541c51f228a\""
Feb 13 15:44:27.123231 systemd[1]: Started cri-containerd-bc2b24c65a8cbae7e8472188d7942e097dd1ad895398864727cd6541c51f228a.scope - libcontainer container bc2b24c65a8cbae7e8472188d7942e097dd1ad895398864727cd6541c51f228a.
Feb 13 15:44:27.186231 containerd[1896]: time="2025-02-13T15:44:27.186183470Z" level=info msg="StartContainer for \"bc2b24c65a8cbae7e8472188d7942e097dd1ad895398864727cd6541c51f228a\" returns successfully"
Feb 13 15:44:27.595325 kubelet[2396]: I0213 15:44:27.595213    2396 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-9dw5w" podStartSLOduration=6.997503601 podStartE2EDuration="19.59518958s" podCreationTimestamp="2025-02-13 15:44:08 +0000 UTC" firstStartedPulling="2025-02-13 15:44:14.381482785 +0000 UTC m=+26.463365057" lastFinishedPulling="2025-02-13 15:44:26.979168758 +0000 UTC m=+39.061051036" observedRunningTime="2025-02-13 15:44:27.594951792 +0000 UTC m=+39.676834108" watchObservedRunningTime="2025-02-13 15:44:27.59518958 +0000 UTC m=+39.677071875"
Feb 13 15:44:27.715473 kubelet[2396]: E0213 15:44:27.715430    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:28.665953 kubelet[2396]: E0213 15:44:28.665824    2396 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:28.716441 kubelet[2396]: E0213 15:44:28.716357    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:29.050674 containerd[1896]: time="2025-02-13T15:44:29.050617108Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:44:29.052133 containerd[1896]: time="2025-02-13T15:44:29.051991498Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081"
Feb 13 15:44:29.057318 containerd[1896]: time="2025-02-13T15:44:29.055387084Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:44:29.059888 containerd[1896]: time="2025-02-13T15:44:29.058748385Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:44:29.061519 containerd[1896]: time="2025-02-13T15:44:29.060736723Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.080983939s"
Feb 13 15:44:29.061519 containerd[1896]: time="2025-02-13T15:44:29.060871380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\""
Feb 13 15:44:29.063500 containerd[1896]: time="2025-02-13T15:44:29.063466929Z" level=info msg="CreateContainer within sandbox \"e396b8473978663f73fce2e9bf3f11b359558dda1a52cd2b5faa48a0ee7df151\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Feb 13 15:44:29.094764 containerd[1896]: time="2025-02-13T15:44:29.094708404Z" level=info msg="CreateContainer within sandbox \"e396b8473978663f73fce2e9bf3f11b359558dda1a52cd2b5faa48a0ee7df151\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"275ef43a6e3a15d7bf6a20e7d36bd20b60bd3ebf9bc1866bfa03aa0fb0b2cd69\""
Feb 13 15:44:29.095407 containerd[1896]: time="2025-02-13T15:44:29.095375725Z" level=info msg="StartContainer for \"275ef43a6e3a15d7bf6a20e7d36bd20b60bd3ebf9bc1866bfa03aa0fb0b2cd69\""
Feb 13 15:44:29.145404 systemd[1]: run-containerd-runc-k8s.io-275ef43a6e3a15d7bf6a20e7d36bd20b60bd3ebf9bc1866bfa03aa0fb0b2cd69-runc.qro3Nr.mount: Deactivated successfully.
Feb 13 15:44:29.154062 systemd[1]: Started cri-containerd-275ef43a6e3a15d7bf6a20e7d36bd20b60bd3ebf9bc1866bfa03aa0fb0b2cd69.scope - libcontainer container 275ef43a6e3a15d7bf6a20e7d36bd20b60bd3ebf9bc1866bfa03aa0fb0b2cd69.
Feb 13 15:44:29.197962 containerd[1896]: time="2025-02-13T15:44:29.197876363Z" level=info msg="StartContainer for \"275ef43a6e3a15d7bf6a20e7d36bd20b60bd3ebf9bc1866bfa03aa0fb0b2cd69\" returns successfully"
Feb 13 15:44:29.418383 kubelet[2396]: I0213 15:44:29.418240    2396 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 15:44:29.716557 kubelet[2396]: E0213 15:44:29.716511    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:29.881367 kubelet[2396]: I0213 15:44:29.881322    2396 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Feb 13 15:44:29.881538 kubelet[2396]: I0213 15:44:29.881387    2396 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Feb 13 15:44:29.955948 kernel: bpftool[4784]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Feb 13 15:44:30.472735 systemd-networkd[1817]: vxlan.calico: Link UP
Feb 13 15:44:30.474635 systemd-networkd[1817]: vxlan.calico: Gained carrier
Feb 13 15:44:30.522137 (udev-worker)[4841]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:44:30.528126 (udev-worker)[4838]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:44:30.529300 (udev-worker)[4851]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:44:30.717706 kubelet[2396]: E0213 15:44:30.717663    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:31.718618 kubelet[2396]: E0213 15:44:31.718560    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:32.038207 systemd-networkd[1817]: vxlan.calico: Gained IPv6LL
Feb 13 15:44:32.719468 kubelet[2396]: E0213 15:44:32.719409    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:33.720529 kubelet[2396]: E0213 15:44:33.720469    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:34.363067 ntpd[1878]: Listen normally on 10 vxlan.calico 192.168.19.192:123
Feb 13 15:44:34.363176 ntpd[1878]: Listen normally on 11 vxlan.calico [fe80::64cb:4bff:fedc:6acf%6]:123
Feb 13 15:44:34.720993 kubelet[2396]: E0213 15:44:34.720937    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:35.721211 kubelet[2396]: E0213 15:44:35.721146    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:36.540693 kubelet[2396]: I0213 15:44:36.539982    2396 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 13 15:44:36.721551 kubelet[2396]: E0213 15:44:36.721446    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:37.095914 kubelet[2396]: I0213 15:44:37.095814    2396 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-wpjns" podStartSLOduration=34.337819701 podStartE2EDuration="49.095775864s" podCreationTimestamp="2025-02-13 15:43:48 +0000 UTC" firstStartedPulling="2025-02-13 15:44:14.304299131 +0000 UTC m=+26.386181408" lastFinishedPulling="2025-02-13 15:44:29.0622553 +0000 UTC m=+41.144137571" observedRunningTime="2025-02-13 15:44:29.593253694 +0000 UTC m=+41.675135986" watchObservedRunningTime="2025-02-13 15:44:37.095775864 +0000 UTC m=+49.177658147"
Feb 13 15:44:37.124881 systemd[1]: Created slice kubepods-besteffort-pod94023715_5d31_4cff_bb09_d925151b06fd.slice - libcontainer container kubepods-besteffort-pod94023715_5d31_4cff_bb09_d925151b06fd.slice.
Feb 13 15:44:37.139987 kubelet[2396]: I0213 15:44:37.138218    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/94023715-5d31-4cff-bb09-d925151b06fd-data\") pod \"nfs-server-provisioner-0\" (UID: \"94023715-5d31-4cff-bb09-d925151b06fd\") " pod="default/nfs-server-provisioner-0"
Feb 13 15:44:37.139987 kubelet[2396]: I0213 15:44:37.138268    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsn59\" (UniqueName: \"kubernetes.io/projected/94023715-5d31-4cff-bb09-d925151b06fd-kube-api-access-dsn59\") pod \"nfs-server-provisioner-0\" (UID: \"94023715-5d31-4cff-bb09-d925151b06fd\") " pod="default/nfs-server-provisioner-0"
Feb 13 15:44:37.441238 containerd[1896]: time="2025-02-13T15:44:37.440964330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:94023715-5d31-4cff-bb09-d925151b06fd,Namespace:default,Attempt:0,}"
Feb 13 15:44:37.722768 kubelet[2396]: E0213 15:44:37.722562    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:37.917902 systemd-networkd[1817]: cali60e51b789ff: Link UP
Feb 13 15:44:37.921548 systemd-networkd[1817]: cali60e51b789ff: Gained carrier
Feb 13 15:44:37.931137 (udev-worker)[4965]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:44:37.965192 containerd[1896]: 2025-02-13 15:44:37.725 [INFO][4948] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.30.13-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default  94023715-5d31-4cff-bb09-d925151b06fd 1468 0 2025-02-13 15:44:37 +0000 UTC <nil> <nil> map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s  172.31.30.13  nfs-server-provisioner-0 eth0 nfs-server-provisioner [] []   [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff  [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="4a6c511d36cad1aed2f0e9494c2bd18307e3f3fc240f803df50812d5a21ee8e1" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.30.13-k8s-nfs--server--provisioner--0-"
Feb 13 15:44:37.965192 containerd[1896]: 2025-02-13 15:44:37.729 [INFO][4948] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4a6c511d36cad1aed2f0e9494c2bd18307e3f3fc240f803df50812d5a21ee8e1" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.30.13-k8s-nfs--server--provisioner--0-eth0"
Feb 13 15:44:37.965192 containerd[1896]: 2025-02-13 15:44:37.811 [INFO][4958] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4a6c511d36cad1aed2f0e9494c2bd18307e3f3fc240f803df50812d5a21ee8e1" HandleID="k8s-pod-network.4a6c511d36cad1aed2f0e9494c2bd18307e3f3fc240f803df50812d5a21ee8e1" Workload="172.31.30.13-k8s-nfs--server--provisioner--0-eth0"
Feb 13 15:44:37.965192 containerd[1896]: 2025-02-13 15:44:37.833 [INFO][4958] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4a6c511d36cad1aed2f0e9494c2bd18307e3f3fc240f803df50812d5a21ee8e1" HandleID="k8s-pod-network.4a6c511d36cad1aed2f0e9494c2bd18307e3f3fc240f803df50812d5a21ee8e1" Workload="172.31.30.13-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d56a0), Attrs:map[string]string{"namespace":"default", "node":"172.31.30.13", "pod":"nfs-server-provisioner-0", "timestamp":"2025-02-13 15:44:37.811188175 +0000 UTC"}, Hostname:"172.31.30.13", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 13 15:44:37.965192 containerd[1896]: 2025-02-13 15:44:37.833 [INFO][4958] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 15:44:37.965192 containerd[1896]: 2025-02-13 15:44:37.833 [INFO][4958] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 15:44:37.965192 containerd[1896]: 2025-02-13 15:44:37.833 [INFO][4958] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.30.13'
Feb 13 15:44:37.965192 containerd[1896]: 2025-02-13 15:44:37.841 [INFO][4958] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4a6c511d36cad1aed2f0e9494c2bd18307e3f3fc240f803df50812d5a21ee8e1" host="172.31.30.13"
Feb 13 15:44:37.965192 containerd[1896]: 2025-02-13 15:44:37.850 [INFO][4958] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.30.13"
Feb 13 15:44:37.965192 containerd[1896]: 2025-02-13 15:44:37.859 [INFO][4958] ipam/ipam.go 489: Trying affinity for 192.168.19.192/26 host="172.31.30.13"
Feb 13 15:44:37.965192 containerd[1896]: 2025-02-13 15:44:37.863 [INFO][4958] ipam/ipam.go 155: Attempting to load block cidr=192.168.19.192/26 host="172.31.30.13"
Feb 13 15:44:37.965192 containerd[1896]: 2025-02-13 15:44:37.871 [INFO][4958] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.192/26 host="172.31.30.13"
Feb 13 15:44:37.965192 containerd[1896]: 2025-02-13 15:44:37.871 [INFO][4958] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.192/26 handle="k8s-pod-network.4a6c511d36cad1aed2f0e9494c2bd18307e3f3fc240f803df50812d5a21ee8e1" host="172.31.30.13"
Feb 13 15:44:37.965192 containerd[1896]: 2025-02-13 15:44:37.875 [INFO][4958] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4a6c511d36cad1aed2f0e9494c2bd18307e3f3fc240f803df50812d5a21ee8e1
Feb 13 15:44:37.965192 containerd[1896]: 2025-02-13 15:44:37.885 [INFO][4958] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.19.192/26 handle="k8s-pod-network.4a6c511d36cad1aed2f0e9494c2bd18307e3f3fc240f803df50812d5a21ee8e1" host="172.31.30.13"
Feb 13 15:44:37.965192 containerd[1896]: 2025-02-13 15:44:37.901 [INFO][4958] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.19.196/26] block=192.168.19.192/26 handle="k8s-pod-network.4a6c511d36cad1aed2f0e9494c2bd18307e3f3fc240f803df50812d5a21ee8e1" host="172.31.30.13"
Feb 13 15:44:37.965192 containerd[1896]: 2025-02-13 15:44:37.901 [INFO][4958] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.196/26] handle="k8s-pod-network.4a6c511d36cad1aed2f0e9494c2bd18307e3f3fc240f803df50812d5a21ee8e1" host="172.31.30.13"
Feb 13 15:44:37.965192 containerd[1896]: 2025-02-13 15:44:37.901 [INFO][4958] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 15:44:37.965192 containerd[1896]: 2025-02-13 15:44:37.901 [INFO][4958] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.19.196/26] IPv6=[] ContainerID="4a6c511d36cad1aed2f0e9494c2bd18307e3f3fc240f803df50812d5a21ee8e1" HandleID="k8s-pod-network.4a6c511d36cad1aed2f0e9494c2bd18307e3f3fc240f803df50812d5a21ee8e1" Workload="172.31.30.13-k8s-nfs--server--provisioner--0-eth0"
Feb 13 15:44:37.967821 containerd[1896]: 2025-02-13 15:44:37.908 [INFO][4948] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4a6c511d36cad1aed2f0e9494c2bd18307e3f3fc240f803df50812d5a21ee8e1" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.30.13-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.13-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"94023715-5d31-4cff-bb09-d925151b06fd", ResourceVersion:"1468", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 44, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.13", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.19.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 15:44:37.967821 containerd[1896]: 2025-02-13 15:44:37.910 [INFO][4948] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.19.196/32] ContainerID="4a6c511d36cad1aed2f0e9494c2bd18307e3f3fc240f803df50812d5a21ee8e1" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.30.13-k8s-nfs--server--provisioner--0-eth0"
Feb 13 15:44:37.967821 containerd[1896]: 2025-02-13 15:44:37.910 [INFO][4948] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="4a6c511d36cad1aed2f0e9494c2bd18307e3f3fc240f803df50812d5a21ee8e1" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.30.13-k8s-nfs--server--provisioner--0-eth0"
Feb 13 15:44:37.967821 containerd[1896]: 2025-02-13 15:44:37.914 [INFO][4948] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4a6c511d36cad1aed2f0e9494c2bd18307e3f3fc240f803df50812d5a21ee8e1" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.30.13-k8s-nfs--server--provisioner--0-eth0"
Feb 13 15:44:37.968148 containerd[1896]: 2025-02-13 15:44:37.918 [INFO][4948] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4a6c511d36cad1aed2f0e9494c2bd18307e3f3fc240f803df50812d5a21ee8e1" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.30.13-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.13-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"94023715-5d31-4cff-bb09-d925151b06fd", ResourceVersion:"1468", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 44, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.13", ContainerID:"4a6c511d36cad1aed2f0e9494c2bd18307e3f3fc240f803df50812d5a21ee8e1", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.19.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"96:80:f2:9b:e3:b0", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 15:44:37.968148 containerd[1896]: 2025-02-13 15:44:37.953 [INFO][4948] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4a6c511d36cad1aed2f0e9494c2bd18307e3f3fc240f803df50812d5a21ee8e1" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.30.13-k8s-nfs--server--provisioner--0-eth0"
Feb 13 15:44:38.078965 containerd[1896]: time="2025-02-13T15:44:38.076865580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:44:38.078965 containerd[1896]: time="2025-02-13T15:44:38.077014139Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:44:38.078965 containerd[1896]: time="2025-02-13T15:44:38.077049039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:44:38.078965 containerd[1896]: time="2025-02-13T15:44:38.077345853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:44:38.115116 systemd[1]: Started cri-containerd-4a6c511d36cad1aed2f0e9494c2bd18307e3f3fc240f803df50812d5a21ee8e1.scope - libcontainer container 4a6c511d36cad1aed2f0e9494c2bd18307e3f3fc240f803df50812d5a21ee8e1.
Feb 13 15:44:38.183282 containerd[1896]: time="2025-02-13T15:44:38.183241887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:94023715-5d31-4cff-bb09-d925151b06fd,Namespace:default,Attempt:0,} returns sandbox id \"4a6c511d36cad1aed2f0e9494c2bd18307e3f3fc240f803df50812d5a21ee8e1\""
Feb 13 15:44:38.187964 containerd[1896]: time="2025-02-13T15:44:38.187929402Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Feb 13 15:44:38.723151 kubelet[2396]: E0213 15:44:38.723090    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:39.398362 systemd-networkd[1817]: cali60e51b789ff: Gained IPv6LL
Feb 13 15:44:39.724914 kubelet[2396]: E0213 15:44:39.724464    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:40.725028 kubelet[2396]: E0213 15:44:40.724991    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:41.725993 kubelet[2396]: E0213 15:44:41.725955    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:41.816962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3723357600.mount: Deactivated successfully.
Feb 13 15:44:42.366445 ntpd[1878]: Listen normally on 12 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%9]:123
Feb 13 15:44:42.727327 kubelet[2396]: E0213 15:44:42.727292    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:43.729202 kubelet[2396]: E0213 15:44:43.729160    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:44.659672 containerd[1896]: time="2025-02-13T15:44:44.659614344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:44:44.665583 containerd[1896]: time="2025-02-13T15:44:44.665519042Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406"
Feb 13 15:44:44.669654 containerd[1896]: time="2025-02-13T15:44:44.669603199Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:44:44.723070 containerd[1896]: time="2025-02-13T15:44:44.722922952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:44:44.724128 containerd[1896]: time="2025-02-13T15:44:44.723946382Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 6.53597843s"
Feb 13 15:44:44.724128 containerd[1896]: time="2025-02-13T15:44:44.723989529Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Feb 13 15:44:44.727391 containerd[1896]: time="2025-02-13T15:44:44.727332204Z" level=info msg="CreateContainer within sandbox \"4a6c511d36cad1aed2f0e9494c2bd18307e3f3fc240f803df50812d5a21ee8e1\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Feb 13 15:44:44.729832 kubelet[2396]: E0213 15:44:44.729791    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:44.762960 containerd[1896]: time="2025-02-13T15:44:44.762912938Z" level=info msg="CreateContainer within sandbox \"4a6c511d36cad1aed2f0e9494c2bd18307e3f3fc240f803df50812d5a21ee8e1\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"f1d63578fdbcb66bf7aa753b28a9316546686edefca78298adef48f1b4ed3e22\""
Feb 13 15:44:44.763897 containerd[1896]: time="2025-02-13T15:44:44.763857265Z" level=info msg="StartContainer for \"f1d63578fdbcb66bf7aa753b28a9316546686edefca78298adef48f1b4ed3e22\""
Feb 13 15:44:44.819039 systemd[1]: Started cri-containerd-f1d63578fdbcb66bf7aa753b28a9316546686edefca78298adef48f1b4ed3e22.scope - libcontainer container f1d63578fdbcb66bf7aa753b28a9316546686edefca78298adef48f1b4ed3e22.
Feb 13 15:44:44.862975 containerd[1896]: time="2025-02-13T15:44:44.862921596Z" level=info msg="StartContainer for \"f1d63578fdbcb66bf7aa753b28a9316546686edefca78298adef48f1b4ed3e22\" returns successfully"
Feb 13 15:44:45.669020 kubelet[2396]: I0213 15:44:45.668816    2396 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.129918059 podStartE2EDuration="8.668773083s" podCreationTimestamp="2025-02-13 15:44:37 +0000 UTC" firstStartedPulling="2025-02-13 15:44:38.186932851 +0000 UTC m=+50.268815123" lastFinishedPulling="2025-02-13 15:44:44.725787875 +0000 UTC m=+56.807670147" observedRunningTime="2025-02-13 15:44:45.668549074 +0000 UTC m=+57.750431367" watchObservedRunningTime="2025-02-13 15:44:45.668773083 +0000 UTC m=+57.750655376"
Feb 13 15:44:45.730382 kubelet[2396]: E0213 15:44:45.730286    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:46.730680 kubelet[2396]: E0213 15:44:46.730606    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:47.731558 kubelet[2396]: E0213 15:44:47.731493    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:48.665325 kubelet[2396]: E0213 15:44:48.665274    2396 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:48.711457 containerd[1896]: time="2025-02-13T15:44:48.711410441Z" level=info msg="StopPodSandbox for \"1df1052dfcfa46dcea05d88e3b8d50dfac82d4fb973a472dbd4de76fe2b1d8db\""
Feb 13 15:44:48.713179 containerd[1896]: time="2025-02-13T15:44:48.711544023Z" level=info msg="TearDown network for sandbox \"1df1052dfcfa46dcea05d88e3b8d50dfac82d4fb973a472dbd4de76fe2b1d8db\" successfully"
Feb 13 15:44:48.713179 containerd[1896]: time="2025-02-13T15:44:48.711562571Z" level=info msg="StopPodSandbox for \"1df1052dfcfa46dcea05d88e3b8d50dfac82d4fb973a472dbd4de76fe2b1d8db\" returns successfully"
Feb 13 15:44:48.721882 containerd[1896]: time="2025-02-13T15:44:48.721766536Z" level=info msg="RemovePodSandbox for \"1df1052dfcfa46dcea05d88e3b8d50dfac82d4fb973a472dbd4de76fe2b1d8db\""
Feb 13 15:44:48.737418 kubelet[2396]: E0213 15:44:48.737359    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:48.759196 containerd[1896]: time="2025-02-13T15:44:48.759116954Z" level=info msg="Forcibly stopping sandbox \"1df1052dfcfa46dcea05d88e3b8d50dfac82d4fb973a472dbd4de76fe2b1d8db\""
Feb 13 15:44:48.760672 containerd[1896]: time="2025-02-13T15:44:48.759774620Z" level=info msg="TearDown network for sandbox \"1df1052dfcfa46dcea05d88e3b8d50dfac82d4fb973a472dbd4de76fe2b1d8db\" successfully"
Feb 13 15:44:48.770466 containerd[1896]: time="2025-02-13T15:44:48.770405557Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1df1052dfcfa46dcea05d88e3b8d50dfac82d4fb973a472dbd4de76fe2b1d8db\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:44:48.770630 containerd[1896]: time="2025-02-13T15:44:48.770499537Z" level=info msg="RemovePodSandbox \"1df1052dfcfa46dcea05d88e3b8d50dfac82d4fb973a472dbd4de76fe2b1d8db\" returns successfully"
Feb 13 15:44:48.771482 containerd[1896]: time="2025-02-13T15:44:48.771446032Z" level=info msg="StopPodSandbox for \"88a5cf61a89c8d07a0e6583e15cd66236cfc0865d38d76aa2911169949bf9611\""
Feb 13 15:44:48.774230 containerd[1896]: time="2025-02-13T15:44:48.774118443Z" level=info msg="TearDown network for sandbox \"88a5cf61a89c8d07a0e6583e15cd66236cfc0865d38d76aa2911169949bf9611\" successfully"
Feb 13 15:44:48.774230 containerd[1896]: time="2025-02-13T15:44:48.774146385Z" level=info msg="StopPodSandbox for \"88a5cf61a89c8d07a0e6583e15cd66236cfc0865d38d76aa2911169949bf9611\" returns successfully"
Feb 13 15:44:48.774831 containerd[1896]: time="2025-02-13T15:44:48.774666603Z" level=info msg="RemovePodSandbox for \"88a5cf61a89c8d07a0e6583e15cd66236cfc0865d38d76aa2911169949bf9611\""
Feb 13 15:44:48.774831 containerd[1896]: time="2025-02-13T15:44:48.774690950Z" level=info msg="Forcibly stopping sandbox \"88a5cf61a89c8d07a0e6583e15cd66236cfc0865d38d76aa2911169949bf9611\""
Feb 13 15:44:48.774831 containerd[1896]: time="2025-02-13T15:44:48.774755625Z" level=info msg="TearDown network for sandbox \"88a5cf61a89c8d07a0e6583e15cd66236cfc0865d38d76aa2911169949bf9611\" successfully"
Feb 13 15:44:48.807357 containerd[1896]: time="2025-02-13T15:44:48.807300208Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"88a5cf61a89c8d07a0e6583e15cd66236cfc0865d38d76aa2911169949bf9611\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:44:48.807651 containerd[1896]: time="2025-02-13T15:44:48.807386516Z" level=info msg="RemovePodSandbox \"88a5cf61a89c8d07a0e6583e15cd66236cfc0865d38d76aa2911169949bf9611\" returns successfully"
Feb 13 15:44:48.807958 containerd[1896]: time="2025-02-13T15:44:48.807926363Z" level=info msg="StopPodSandbox for \"c8536beca7c1adc7380a3d753335601951d908ff9b36659ec5353a8148eef270\""
Feb 13 15:44:48.808068 containerd[1896]: time="2025-02-13T15:44:48.808047515Z" level=info msg="TearDown network for sandbox \"c8536beca7c1adc7380a3d753335601951d908ff9b36659ec5353a8148eef270\" successfully"
Feb 13 15:44:48.808117 containerd[1896]: time="2025-02-13T15:44:48.808067480Z" level=info msg="StopPodSandbox for \"c8536beca7c1adc7380a3d753335601951d908ff9b36659ec5353a8148eef270\" returns successfully"
Feb 13 15:44:48.808431 containerd[1896]: time="2025-02-13T15:44:48.808410011Z" level=info msg="RemovePodSandbox for \"c8536beca7c1adc7380a3d753335601951d908ff9b36659ec5353a8148eef270\""
Feb 13 15:44:48.808567 containerd[1896]: time="2025-02-13T15:44:48.808458030Z" level=info msg="Forcibly stopping sandbox \"c8536beca7c1adc7380a3d753335601951d908ff9b36659ec5353a8148eef270\""
Feb 13 15:44:48.808616 containerd[1896]: time="2025-02-13T15:44:48.808537858Z" level=info msg="TearDown network for sandbox \"c8536beca7c1adc7380a3d753335601951d908ff9b36659ec5353a8148eef270\" successfully"
Feb 13 15:44:48.812293 containerd[1896]: time="2025-02-13T15:44:48.812246220Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c8536beca7c1adc7380a3d753335601951d908ff9b36659ec5353a8148eef270\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:44:48.812554 containerd[1896]: time="2025-02-13T15:44:48.812303834Z" level=info msg="RemovePodSandbox \"c8536beca7c1adc7380a3d753335601951d908ff9b36659ec5353a8148eef270\" returns successfully"
Feb 13 15:44:48.813362 containerd[1896]: time="2025-02-13T15:44:48.812938842Z" level=info msg="StopPodSandbox for \"9c81786dc2bcba77be493d89d3c6cb3fd4a262dbc72b27518c3242db3d36158a\""
Feb 13 15:44:48.813362 containerd[1896]: time="2025-02-13T15:44:48.813267669Z" level=info msg="TearDown network for sandbox \"9c81786dc2bcba77be493d89d3c6cb3fd4a262dbc72b27518c3242db3d36158a\" successfully"
Feb 13 15:44:48.813362 containerd[1896]: time="2025-02-13T15:44:48.813285838Z" level=info msg="StopPodSandbox for \"9c81786dc2bcba77be493d89d3c6cb3fd4a262dbc72b27518c3242db3d36158a\" returns successfully"
Feb 13 15:44:48.814866 containerd[1896]: time="2025-02-13T15:44:48.814836363Z" level=info msg="RemovePodSandbox for \"9c81786dc2bcba77be493d89d3c6cb3fd4a262dbc72b27518c3242db3d36158a\""
Feb 13 15:44:48.815139 containerd[1896]: time="2025-02-13T15:44:48.814870059Z" level=info msg="Forcibly stopping sandbox \"9c81786dc2bcba77be493d89d3c6cb3fd4a262dbc72b27518c3242db3d36158a\""
Feb 13 15:44:48.815219 containerd[1896]: time="2025-02-13T15:44:48.815156849Z" level=info msg="TearDown network for sandbox \"9c81786dc2bcba77be493d89d3c6cb3fd4a262dbc72b27518c3242db3d36158a\" successfully"
Feb 13 15:44:48.819890 containerd[1896]: time="2025-02-13T15:44:48.819331208Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9c81786dc2bcba77be493d89d3c6cb3fd4a262dbc72b27518c3242db3d36158a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:44:48.819890 containerd[1896]: time="2025-02-13T15:44:48.819398340Z" level=info msg="RemovePodSandbox \"9c81786dc2bcba77be493d89d3c6cb3fd4a262dbc72b27518c3242db3d36158a\" returns successfully"
Feb 13 15:44:48.820312 containerd[1896]: time="2025-02-13T15:44:48.820281981Z" level=info msg="StopPodSandbox for \"98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556\""
Feb 13 15:44:48.820411 containerd[1896]: time="2025-02-13T15:44:48.820392620Z" level=info msg="TearDown network for sandbox \"98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556\" successfully"
Feb 13 15:44:48.820459 containerd[1896]: time="2025-02-13T15:44:48.820411662Z" level=info msg="StopPodSandbox for \"98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556\" returns successfully"
Feb 13 15:44:48.822832 containerd[1896]: time="2025-02-13T15:44:48.820899961Z" level=info msg="RemovePodSandbox for \"98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556\""
Feb 13 15:44:48.822832 containerd[1896]: time="2025-02-13T15:44:48.820931140Z" level=info msg="Forcibly stopping sandbox \"98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556\""
Feb 13 15:44:48.822832 containerd[1896]: time="2025-02-13T15:44:48.821015184Z" level=info msg="TearDown network for sandbox \"98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556\" successfully"
Feb 13 15:44:48.830602 containerd[1896]: time="2025-02-13T15:44:48.830553917Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:44:48.830756 containerd[1896]: time="2025-02-13T15:44:48.830616963Z" level=info msg="RemovePodSandbox \"98afca001bfa426f7b15db32c6c70330a1107e3025ae014c9a818817e7f17556\" returns successfully"
Feb 13 15:44:48.832424 containerd[1896]: time="2025-02-13T15:44:48.830978661Z" level=info msg="StopPodSandbox for \"6eaf4246a3d5e7e3165b3333314936cdddcf2d034115976cf86b314f099da07b\""
Feb 13 15:44:48.832424 containerd[1896]: time="2025-02-13T15:44:48.831112796Z" level=info msg="TearDown network for sandbox \"6eaf4246a3d5e7e3165b3333314936cdddcf2d034115976cf86b314f099da07b\" successfully"
Feb 13 15:44:48.832424 containerd[1896]: time="2025-02-13T15:44:48.831128769Z" level=info msg="StopPodSandbox for \"6eaf4246a3d5e7e3165b3333314936cdddcf2d034115976cf86b314f099da07b\" returns successfully"
Feb 13 15:44:48.833391 containerd[1896]: time="2025-02-13T15:44:48.833361874Z" level=info msg="RemovePodSandbox for \"6eaf4246a3d5e7e3165b3333314936cdddcf2d034115976cf86b314f099da07b\""
Feb 13 15:44:48.833494 containerd[1896]: time="2025-02-13T15:44:48.833398348Z" level=info msg="Forcibly stopping sandbox \"6eaf4246a3d5e7e3165b3333314936cdddcf2d034115976cf86b314f099da07b\""
Feb 13 15:44:48.833611 containerd[1896]: time="2025-02-13T15:44:48.833486300Z" level=info msg="TearDown network for sandbox \"6eaf4246a3d5e7e3165b3333314936cdddcf2d034115976cf86b314f099da07b\" successfully"
Feb 13 15:44:48.861731 containerd[1896]: time="2025-02-13T15:44:48.861621513Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6eaf4246a3d5e7e3165b3333314936cdddcf2d034115976cf86b314f099da07b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:44:48.862029 containerd[1896]: time="2025-02-13T15:44:48.861897951Z" level=info msg="RemovePodSandbox \"6eaf4246a3d5e7e3165b3333314936cdddcf2d034115976cf86b314f099da07b\" returns successfully"
Feb 13 15:44:48.863602 containerd[1896]: time="2025-02-13T15:44:48.863373766Z" level=info msg="StopPodSandbox for \"35005c0d515a0eaddf1296b48287bf2fc564795686b923d26c7167822c461e34\""
Feb 13 15:44:48.864110 containerd[1896]: time="2025-02-13T15:44:48.863981390Z" level=info msg="TearDown network for sandbox \"35005c0d515a0eaddf1296b48287bf2fc564795686b923d26c7167822c461e34\" successfully"
Feb 13 15:44:48.864279 containerd[1896]: time="2025-02-13T15:44:48.864042031Z" level=info msg="StopPodSandbox for \"35005c0d515a0eaddf1296b48287bf2fc564795686b923d26c7167822c461e34\" returns successfully"
Feb 13 15:44:48.865856 containerd[1896]: time="2025-02-13T15:44:48.865740829Z" level=info msg="RemovePodSandbox for \"35005c0d515a0eaddf1296b48287bf2fc564795686b923d26c7167822c461e34\""
Feb 13 15:44:48.865856 containerd[1896]: time="2025-02-13T15:44:48.865849997Z" level=info msg="Forcibly stopping sandbox \"35005c0d515a0eaddf1296b48287bf2fc564795686b923d26c7167822c461e34\""
Feb 13 15:44:48.866235 containerd[1896]: time="2025-02-13T15:44:48.865978116Z" level=info msg="TearDown network for sandbox \"35005c0d515a0eaddf1296b48287bf2fc564795686b923d26c7167822c461e34\" successfully"
Feb 13 15:44:48.872389 containerd[1896]: time="2025-02-13T15:44:48.872346943Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"35005c0d515a0eaddf1296b48287bf2fc564795686b923d26c7167822c461e34\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:44:48.872540 containerd[1896]: time="2025-02-13T15:44:48.872419469Z" level=info msg="RemovePodSandbox \"35005c0d515a0eaddf1296b48287bf2fc564795686b923d26c7167822c461e34\" returns successfully"
Feb 13 15:44:48.872941 containerd[1896]: time="2025-02-13T15:44:48.872883586Z" level=info msg="StopPodSandbox for \"b21fa514b2a2e042658f887a5d7d282c7a1ac15f6face81d510765827a1e1297\""
Feb 13 15:44:48.873054 containerd[1896]: time="2025-02-13T15:44:48.872995946Z" level=info msg="TearDown network for sandbox \"b21fa514b2a2e042658f887a5d7d282c7a1ac15f6face81d510765827a1e1297\" successfully"
Feb 13 15:44:48.873277 containerd[1896]: time="2025-02-13T15:44:48.873051060Z" level=info msg="StopPodSandbox for \"b21fa514b2a2e042658f887a5d7d282c7a1ac15f6face81d510765827a1e1297\" returns successfully"
Feb 13 15:44:48.873848 containerd[1896]: time="2025-02-13T15:44:48.873668690Z" level=info msg="RemovePodSandbox for \"b21fa514b2a2e042658f887a5d7d282c7a1ac15f6face81d510765827a1e1297\""
Feb 13 15:44:48.873848 containerd[1896]: time="2025-02-13T15:44:48.873715338Z" level=info msg="Forcibly stopping sandbox \"b21fa514b2a2e042658f887a5d7d282c7a1ac15f6face81d510765827a1e1297\""
Feb 13 15:44:48.873978 containerd[1896]: time="2025-02-13T15:44:48.873887940Z" level=info msg="TearDown network for sandbox \"b21fa514b2a2e042658f887a5d7d282c7a1ac15f6face81d510765827a1e1297\" successfully"
Feb 13 15:44:48.879686 containerd[1896]: time="2025-02-13T15:44:48.879517675Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b21fa514b2a2e042658f887a5d7d282c7a1ac15f6face81d510765827a1e1297\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:44:48.879686 containerd[1896]: time="2025-02-13T15:44:48.879587251Z" level=info msg="RemovePodSandbox \"b21fa514b2a2e042658f887a5d7d282c7a1ac15f6face81d510765827a1e1297\" returns successfully"
Feb 13 15:44:48.880144 containerd[1896]: time="2025-02-13T15:44:48.880118992Z" level=info msg="StopPodSandbox for \"4227299146889fc5eacd5b01a7ebae841b87836f7462cf3119f2502d492cf188\""
Feb 13 15:44:48.880370 containerd[1896]: time="2025-02-13T15:44:48.880331761Z" level=info msg="TearDown network for sandbox \"4227299146889fc5eacd5b01a7ebae841b87836f7462cf3119f2502d492cf188\" successfully"
Feb 13 15:44:48.880370 containerd[1896]: time="2025-02-13T15:44:48.880350746Z" level=info msg="StopPodSandbox for \"4227299146889fc5eacd5b01a7ebae841b87836f7462cf3119f2502d492cf188\" returns successfully"
Feb 13 15:44:48.880928 containerd[1896]: time="2025-02-13T15:44:48.880894496Z" level=info msg="RemovePodSandbox for \"4227299146889fc5eacd5b01a7ebae841b87836f7462cf3119f2502d492cf188\""
Feb 13 15:44:48.881028 containerd[1896]: time="2025-02-13T15:44:48.880931056Z" level=info msg="Forcibly stopping sandbox \"4227299146889fc5eacd5b01a7ebae841b87836f7462cf3119f2502d492cf188\""
Feb 13 15:44:48.881155 containerd[1896]: time="2025-02-13T15:44:48.881045780Z" level=info msg="TearDown network for sandbox \"4227299146889fc5eacd5b01a7ebae841b87836f7462cf3119f2502d492cf188\" successfully"
Feb 13 15:44:48.886664 containerd[1896]: time="2025-02-13T15:44:48.886230750Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4227299146889fc5eacd5b01a7ebae841b87836f7462cf3119f2502d492cf188\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:44:48.886664 containerd[1896]: time="2025-02-13T15:44:48.886295674Z" level=info msg="RemovePodSandbox \"4227299146889fc5eacd5b01a7ebae841b87836f7462cf3119f2502d492cf188\" returns successfully"
Feb 13 15:44:48.888087 containerd[1896]: time="2025-02-13T15:44:48.887213368Z" level=info msg="StopPodSandbox for \"9319c94ca13030aa9d75515f11a6a27129f26b3a540c8fc9b3ff08bb8b690689\""
Feb 13 15:44:48.888087 containerd[1896]: time="2025-02-13T15:44:48.887907733Z" level=info msg="TearDown network for sandbox \"9319c94ca13030aa9d75515f11a6a27129f26b3a540c8fc9b3ff08bb8b690689\" successfully"
Feb 13 15:44:48.888087 containerd[1896]: time="2025-02-13T15:44:48.887930804Z" level=info msg="StopPodSandbox for \"9319c94ca13030aa9d75515f11a6a27129f26b3a540c8fc9b3ff08bb8b690689\" returns successfully"
Feb 13 15:44:48.888484 containerd[1896]: time="2025-02-13T15:44:48.888440901Z" level=info msg="RemovePodSandbox for \"9319c94ca13030aa9d75515f11a6a27129f26b3a540c8fc9b3ff08bb8b690689\""
Feb 13 15:44:48.888569 containerd[1896]: time="2025-02-13T15:44:48.888487722Z" level=info msg="Forcibly stopping sandbox \"9319c94ca13030aa9d75515f11a6a27129f26b3a540c8fc9b3ff08bb8b690689\""
Feb 13 15:44:48.888644 containerd[1896]: time="2025-02-13T15:44:48.888575410Z" level=info msg="TearDown network for sandbox \"9319c94ca13030aa9d75515f11a6a27129f26b3a540c8fc9b3ff08bb8b690689\" successfully"
Feb 13 15:44:48.894851 containerd[1896]: time="2025-02-13T15:44:48.894786859Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9319c94ca13030aa9d75515f11a6a27129f26b3a540c8fc9b3ff08bb8b690689\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:44:48.895612 containerd[1896]: time="2025-02-13T15:44:48.895092095Z" level=info msg="RemovePodSandbox \"9319c94ca13030aa9d75515f11a6a27129f26b3a540c8fc9b3ff08bb8b690689\" returns successfully"
Feb 13 15:44:48.895874 containerd[1896]: time="2025-02-13T15:44:48.895810707Z" level=info msg="StopPodSandbox for \"47db03c8528aa2564696e560f597e1d6891193e02ca9fb59d80b465ed11f0300\""
Feb 13 15:44:48.896109 containerd[1896]: time="2025-02-13T15:44:48.896080569Z" level=info msg="TearDown network for sandbox \"47db03c8528aa2564696e560f597e1d6891193e02ca9fb59d80b465ed11f0300\" successfully"
Feb 13 15:44:48.896109 containerd[1896]: time="2025-02-13T15:44:48.896102392Z" level=info msg="StopPodSandbox for \"47db03c8528aa2564696e560f597e1d6891193e02ca9fb59d80b465ed11f0300\" returns successfully"
Feb 13 15:44:48.896647 containerd[1896]: time="2025-02-13T15:44:48.896541354Z" level=info msg="RemovePodSandbox for \"47db03c8528aa2564696e560f597e1d6891193e02ca9fb59d80b465ed11f0300\""
Feb 13 15:44:48.896647 containerd[1896]: time="2025-02-13T15:44:48.896630833Z" level=info msg="Forcibly stopping sandbox \"47db03c8528aa2564696e560f597e1d6891193e02ca9fb59d80b465ed11f0300\""
Feb 13 15:44:48.896777 containerd[1896]: time="2025-02-13T15:44:48.896721106Z" level=info msg="TearDown network for sandbox \"47db03c8528aa2564696e560f597e1d6891193e02ca9fb59d80b465ed11f0300\" successfully"
Feb 13 15:44:48.902831 containerd[1896]: time="2025-02-13T15:44:48.902707945Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"47db03c8528aa2564696e560f597e1d6891193e02ca9fb59d80b465ed11f0300\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:44:48.903091 containerd[1896]: time="2025-02-13T15:44:48.902903584Z" level=info msg="RemovePodSandbox \"47db03c8528aa2564696e560f597e1d6891193e02ca9fb59d80b465ed11f0300\" returns successfully"
Feb 13 15:44:48.904155 containerd[1896]: time="2025-02-13T15:44:48.904125447Z" level=info msg="StopPodSandbox for \"6739fff5b3d87914144e7ed2d37aaa58354cbe54cbd85331910bc79e96a35d44\""
Feb 13 15:44:48.904273 containerd[1896]: time="2025-02-13T15:44:48.904246824Z" level=info msg="TearDown network for sandbox \"6739fff5b3d87914144e7ed2d37aaa58354cbe54cbd85331910bc79e96a35d44\" successfully"
Feb 13 15:44:48.904349 containerd[1896]: time="2025-02-13T15:44:48.904267464Z" level=info msg="StopPodSandbox for \"6739fff5b3d87914144e7ed2d37aaa58354cbe54cbd85331910bc79e96a35d44\" returns successfully"
Feb 13 15:44:48.906320 containerd[1896]: time="2025-02-13T15:44:48.906248692Z" level=info msg="RemovePodSandbox for \"6739fff5b3d87914144e7ed2d37aaa58354cbe54cbd85331910bc79e96a35d44\""
Feb 13 15:44:48.906320 containerd[1896]: time="2025-02-13T15:44:48.906286598Z" level=info msg="Forcibly stopping sandbox \"6739fff5b3d87914144e7ed2d37aaa58354cbe54cbd85331910bc79e96a35d44\""
Feb 13 15:44:48.906587 containerd[1896]: time="2025-02-13T15:44:48.906384239Z" level=info msg="TearDown network for sandbox \"6739fff5b3d87914144e7ed2d37aaa58354cbe54cbd85331910bc79e96a35d44\" successfully"
Feb 13 15:44:48.912225 containerd[1896]: time="2025-02-13T15:44:48.912069207Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6739fff5b3d87914144e7ed2d37aaa58354cbe54cbd85331910bc79e96a35d44\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:44:48.912225 containerd[1896]: time="2025-02-13T15:44:48.912129473Z" level=info msg="RemovePodSandbox \"6739fff5b3d87914144e7ed2d37aaa58354cbe54cbd85331910bc79e96a35d44\" returns successfully"
Feb 13 15:44:48.912604 containerd[1896]: time="2025-02-13T15:44:48.912574709Z" level=info msg="StopPodSandbox for \"2d92d0afdbd9c76e4596d872d0c8cd83c9b34d058b1be61916ad7fd286d5c624\""
Feb 13 15:44:48.914856 containerd[1896]: time="2025-02-13T15:44:48.914813612Z" level=info msg="TearDown network for sandbox \"2d92d0afdbd9c76e4596d872d0c8cd83c9b34d058b1be61916ad7fd286d5c624\" successfully"
Feb 13 15:44:48.914856 containerd[1896]: time="2025-02-13T15:44:48.914846054Z" level=info msg="StopPodSandbox for \"2d92d0afdbd9c76e4596d872d0c8cd83c9b34d058b1be61916ad7fd286d5c624\" returns successfully"
Feb 13 15:44:48.915581 containerd[1896]: time="2025-02-13T15:44:48.915478153Z" level=info msg="RemovePodSandbox for \"2d92d0afdbd9c76e4596d872d0c8cd83c9b34d058b1be61916ad7fd286d5c624\""
Feb 13 15:44:48.915581 containerd[1896]: time="2025-02-13T15:44:48.915514667Z" level=info msg="Forcibly stopping sandbox \"2d92d0afdbd9c76e4596d872d0c8cd83c9b34d058b1be61916ad7fd286d5c624\""
Feb 13 15:44:48.915701 containerd[1896]: time="2025-02-13T15:44:48.915642044Z" level=info msg="TearDown network for sandbox \"2d92d0afdbd9c76e4596d872d0c8cd83c9b34d058b1be61916ad7fd286d5c624\" successfully"
Feb 13 15:44:48.923325 containerd[1896]: time="2025-02-13T15:44:48.923271136Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2d92d0afdbd9c76e4596d872d0c8cd83c9b34d058b1be61916ad7fd286d5c624\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:44:48.923606 containerd[1896]: time="2025-02-13T15:44:48.923339370Z" level=info msg="RemovePodSandbox \"2d92d0afdbd9c76e4596d872d0c8cd83c9b34d058b1be61916ad7fd286d5c624\" returns successfully"
Feb 13 15:44:48.923955 containerd[1896]: time="2025-02-13T15:44:48.923925097Z" level=info msg="StopPodSandbox for \"db51172a800c1adb687d4121d2ec42d9aa54a58a68013c4c0280ad7511b6bafa\""
Feb 13 15:44:48.924090 containerd[1896]: time="2025-02-13T15:44:48.924069242Z" level=info msg="TearDown network for sandbox \"db51172a800c1adb687d4121d2ec42d9aa54a58a68013c4c0280ad7511b6bafa\" successfully"
Feb 13 15:44:48.924139 containerd[1896]: time="2025-02-13T15:44:48.924087229Z" level=info msg="StopPodSandbox for \"db51172a800c1adb687d4121d2ec42d9aa54a58a68013c4c0280ad7511b6bafa\" returns successfully"
Feb 13 15:44:48.924581 containerd[1896]: time="2025-02-13T15:44:48.924555942Z" level=info msg="RemovePodSandbox for \"db51172a800c1adb687d4121d2ec42d9aa54a58a68013c4c0280ad7511b6bafa\""
Feb 13 15:44:48.924664 containerd[1896]: time="2025-02-13T15:44:48.924584247Z" level=info msg="Forcibly stopping sandbox \"db51172a800c1adb687d4121d2ec42d9aa54a58a68013c4c0280ad7511b6bafa\""
Feb 13 15:44:48.924730 containerd[1896]: time="2025-02-13T15:44:48.924671505Z" level=info msg="TearDown network for sandbox \"db51172a800c1adb687d4121d2ec42d9aa54a58a68013c4c0280ad7511b6bafa\" successfully"
Feb 13 15:44:48.932143 containerd[1896]: time="2025-02-13T15:44:48.932101592Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"db51172a800c1adb687d4121d2ec42d9aa54a58a68013c4c0280ad7511b6bafa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:44:48.932459 containerd[1896]: time="2025-02-13T15:44:48.932170127Z" level=info msg="RemovePodSandbox \"db51172a800c1adb687d4121d2ec42d9aa54a58a68013c4c0280ad7511b6bafa\" returns successfully"
Feb 13 15:44:48.932927 containerd[1896]: time="2025-02-13T15:44:48.932896370Z" level=info msg="StopPodSandbox for \"55da48b26bc30cfa7260af1475a550e283fb097ac1f85f25fe65770b53672329\""
Feb 13 15:44:48.933040 containerd[1896]: time="2025-02-13T15:44:48.933020562Z" level=info msg="TearDown network for sandbox \"55da48b26bc30cfa7260af1475a550e283fb097ac1f85f25fe65770b53672329\" successfully"
Feb 13 15:44:48.933087 containerd[1896]: time="2025-02-13T15:44:48.933039737Z" level=info msg="StopPodSandbox for \"55da48b26bc30cfa7260af1475a550e283fb097ac1f85f25fe65770b53672329\" returns successfully"
Feb 13 15:44:48.933747 containerd[1896]: time="2025-02-13T15:44:48.933718570Z" level=info msg="RemovePodSandbox for \"55da48b26bc30cfa7260af1475a550e283fb097ac1f85f25fe65770b53672329\""
Feb 13 15:44:48.933935 containerd[1896]: time="2025-02-13T15:44:48.933748613Z" level=info msg="Forcibly stopping sandbox \"55da48b26bc30cfa7260af1475a550e283fb097ac1f85f25fe65770b53672329\""
Feb 13 15:44:48.933991 containerd[1896]: time="2025-02-13T15:44:48.933933808Z" level=info msg="TearDown network for sandbox \"55da48b26bc30cfa7260af1475a550e283fb097ac1f85f25fe65770b53672329\" successfully"
Feb 13 15:44:48.939095 containerd[1896]: time="2025-02-13T15:44:48.939049396Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"55da48b26bc30cfa7260af1475a550e283fb097ac1f85f25fe65770b53672329\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:44:48.939358 containerd[1896]: time="2025-02-13T15:44:48.939110727Z" level=info msg="RemovePodSandbox \"55da48b26bc30cfa7260af1475a550e283fb097ac1f85f25fe65770b53672329\" returns successfully"
Feb 13 15:44:48.940292 containerd[1896]: time="2025-02-13T15:44:48.940259154Z" level=info msg="StopPodSandbox for \"656cb9cd83dfd3cffb8e8f43761bf28694ef337aa250f9c277f46916222a15cc\""
Feb 13 15:44:48.940411 containerd[1896]: time="2025-02-13T15:44:48.940386501Z" level=info msg="TearDown network for sandbox \"656cb9cd83dfd3cffb8e8f43761bf28694ef337aa250f9c277f46916222a15cc\" successfully"
Feb 13 15:44:48.940470 containerd[1896]: time="2025-02-13T15:44:48.940408554Z" level=info msg="StopPodSandbox for \"656cb9cd83dfd3cffb8e8f43761bf28694ef337aa250f9c277f46916222a15cc\" returns successfully"
Feb 13 15:44:48.941128 containerd[1896]: time="2025-02-13T15:44:48.941065405Z" level=info msg="RemovePodSandbox for \"656cb9cd83dfd3cffb8e8f43761bf28694ef337aa250f9c277f46916222a15cc\""
Feb 13 15:44:48.941128 containerd[1896]: time="2025-02-13T15:44:48.941098281Z" level=info msg="Forcibly stopping sandbox \"656cb9cd83dfd3cffb8e8f43761bf28694ef337aa250f9c277f46916222a15cc\""
Feb 13 15:44:48.941242 containerd[1896]: time="2025-02-13T15:44:48.941183341Z" level=info msg="TearDown network for sandbox \"656cb9cd83dfd3cffb8e8f43761bf28694ef337aa250f9c277f46916222a15cc\" successfully"
Feb 13 15:44:48.946909 containerd[1896]: time="2025-02-13T15:44:48.946867781Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"656cb9cd83dfd3cffb8e8f43761bf28694ef337aa250f9c277f46916222a15cc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:44:48.947098 containerd[1896]: time="2025-02-13T15:44:48.946998642Z" level=info msg="RemovePodSandbox \"656cb9cd83dfd3cffb8e8f43761bf28694ef337aa250f9c277f46916222a15cc\" returns successfully"
Feb 13 15:44:48.947532 containerd[1896]: time="2025-02-13T15:44:48.947506779Z" level=info msg="StopPodSandbox for \"d374608d28f97501de3648300d424ed8c7440f3cb2896d7898f11c9805c0b5db\""
Feb 13 15:44:48.947639 containerd[1896]: time="2025-02-13T15:44:48.947617797Z" level=info msg="TearDown network for sandbox \"d374608d28f97501de3648300d424ed8c7440f3cb2896d7898f11c9805c0b5db\" successfully"
Feb 13 15:44:48.947685 containerd[1896]: time="2025-02-13T15:44:48.947637555Z" level=info msg="StopPodSandbox for \"d374608d28f97501de3648300d424ed8c7440f3cb2896d7898f11c9805c0b5db\" returns successfully"
Feb 13 15:44:48.949207 containerd[1896]: time="2025-02-13T15:44:48.947989263Z" level=info msg="RemovePodSandbox for \"d374608d28f97501de3648300d424ed8c7440f3cb2896d7898f11c9805c0b5db\""
Feb 13 15:44:48.949207 containerd[1896]: time="2025-02-13T15:44:48.948020326Z" level=info msg="Forcibly stopping sandbox \"d374608d28f97501de3648300d424ed8c7440f3cb2896d7898f11c9805c0b5db\""
Feb 13 15:44:48.949207 containerd[1896]: time="2025-02-13T15:44:48.948109891Z" level=info msg="TearDown network for sandbox \"d374608d28f97501de3648300d424ed8c7440f3cb2896d7898f11c9805c0b5db\" successfully"
Feb 13 15:44:48.953625 containerd[1896]: time="2025-02-13T15:44:48.953580651Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d374608d28f97501de3648300d424ed8c7440f3cb2896d7898f11c9805c0b5db\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:44:48.953750 containerd[1896]: time="2025-02-13T15:44:48.953647535Z" level=info msg="RemovePodSandbox \"d374608d28f97501de3648300d424ed8c7440f3cb2896d7898f11c9805c0b5db\" returns successfully"
Feb 13 15:44:48.954368 containerd[1896]: time="2025-02-13T15:44:48.954340316Z" level=info msg="StopPodSandbox for \"ec7fa9dbbdea58e8ef16d6eefb04a24dfbaa5dd7ecb49e8325526f1bb2d43b75\""
Feb 13 15:44:48.954481 containerd[1896]: time="2025-02-13T15:44:48.954460379Z" level=info msg="TearDown network for sandbox \"ec7fa9dbbdea58e8ef16d6eefb04a24dfbaa5dd7ecb49e8325526f1bb2d43b75\" successfully"
Feb 13 15:44:48.954737 containerd[1896]: time="2025-02-13T15:44:48.954480190Z" level=info msg="StopPodSandbox for \"ec7fa9dbbdea58e8ef16d6eefb04a24dfbaa5dd7ecb49e8325526f1bb2d43b75\" returns successfully"
Feb 13 15:44:48.955327 containerd[1896]: time="2025-02-13T15:44:48.955295760Z" level=info msg="RemovePodSandbox for \"ec7fa9dbbdea58e8ef16d6eefb04a24dfbaa5dd7ecb49e8325526f1bb2d43b75\""
Feb 13 15:44:48.955417 containerd[1896]: time="2025-02-13T15:44:48.955328786Z" level=info msg="Forcibly stopping sandbox \"ec7fa9dbbdea58e8ef16d6eefb04a24dfbaa5dd7ecb49e8325526f1bb2d43b75\""
Feb 13 15:44:48.955465 containerd[1896]: time="2025-02-13T15:44:48.955410961Z" level=info msg="TearDown network for sandbox \"ec7fa9dbbdea58e8ef16d6eefb04a24dfbaa5dd7ecb49e8325526f1bb2d43b75\" successfully"
Feb 13 15:44:48.961163 containerd[1896]: time="2025-02-13T15:44:48.961117451Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ec7fa9dbbdea58e8ef16d6eefb04a24dfbaa5dd7ecb49e8325526f1bb2d43b75\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:44:48.961376 containerd[1896]: time="2025-02-13T15:44:48.961284958Z" level=info msg="RemovePodSandbox \"ec7fa9dbbdea58e8ef16d6eefb04a24dfbaa5dd7ecb49e8325526f1bb2d43b75\" returns successfully"
Feb 13 15:44:48.962106 containerd[1896]: time="2025-02-13T15:44:48.962071944Z" level=info msg="StopPodSandbox for \"e8a1ba7cc6a1527825515b4c94ba70d92febff98b58a879c8ae974daab3874fe\""
Feb 13 15:44:48.962209 containerd[1896]: time="2025-02-13T15:44:48.962171292Z" level=info msg="TearDown network for sandbox \"e8a1ba7cc6a1527825515b4c94ba70d92febff98b58a879c8ae974daab3874fe\" successfully"
Feb 13 15:44:48.962209 containerd[1896]: time="2025-02-13T15:44:48.962186225Z" level=info msg="StopPodSandbox for \"e8a1ba7cc6a1527825515b4c94ba70d92febff98b58a879c8ae974daab3874fe\" returns successfully"
Feb 13 15:44:48.962512 containerd[1896]: time="2025-02-13T15:44:48.962486637Z" level=info msg="RemovePodSandbox for \"e8a1ba7cc6a1527825515b4c94ba70d92febff98b58a879c8ae974daab3874fe\""
Feb 13 15:44:48.962589 containerd[1896]: time="2025-02-13T15:44:48.962513427Z" level=info msg="Forcibly stopping sandbox \"e8a1ba7cc6a1527825515b4c94ba70d92febff98b58a879c8ae974daab3874fe\""
Feb 13 15:44:48.962630 containerd[1896]: time="2025-02-13T15:44:48.962575827Z" level=info msg="TearDown network for sandbox \"e8a1ba7cc6a1527825515b4c94ba70d92febff98b58a879c8ae974daab3874fe\" successfully"
Feb 13 15:44:48.968500 containerd[1896]: time="2025-02-13T15:44:48.968448350Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e8a1ba7cc6a1527825515b4c94ba70d92febff98b58a879c8ae974daab3874fe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 15:44:48.968672 containerd[1896]: time="2025-02-13T15:44:48.968511856Z" level=info msg="RemovePodSandbox \"e8a1ba7cc6a1527825515b4c94ba70d92febff98b58a879c8ae974daab3874fe\" returns successfully"
Feb 13 15:44:49.743930 kubelet[2396]: E0213 15:44:49.743883    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:50.745143 kubelet[2396]: E0213 15:44:50.745087    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:51.746209 kubelet[2396]: E0213 15:44:51.745759    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:52.746633 kubelet[2396]: E0213 15:44:52.746579    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:53.747276 kubelet[2396]: E0213 15:44:53.747228    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:54.747737 kubelet[2396]: E0213 15:44:54.747683    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:55.748590 kubelet[2396]: E0213 15:44:55.748527    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:56.749327 kubelet[2396]: E0213 15:44:56.749218    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:57.750085 kubelet[2396]: E0213 15:44:57.750028    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:58.750360 kubelet[2396]: E0213 15:44:58.750317    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:44:59.751175 kubelet[2396]: E0213 15:44:59.751123    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:45:00.751871 kubelet[2396]: E0213 15:45:00.751815    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:45:01.752502 kubelet[2396]: E0213 15:45:01.752448    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:45:02.753269 kubelet[2396]: E0213 15:45:02.753209    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:45:03.753989 kubelet[2396]: E0213 15:45:03.753943    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:45:04.754777 kubelet[2396]: E0213 15:45:04.754718    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:45:05.755264 kubelet[2396]: E0213 15:45:05.755202    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:45:06.755724 kubelet[2396]: E0213 15:45:06.755679    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:45:07.756236 kubelet[2396]: E0213 15:45:07.756177    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:45:08.666049 kubelet[2396]: E0213 15:45:08.665987    2396 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:45:08.756654 kubelet[2396]: E0213 15:45:08.756601    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:45:09.259345 systemd[1]: Created slice kubepods-besteffort-pod803dee53_dfa3_400a_ae0b_a63c82a7db52.slice - libcontainer container kubepods-besteffort-pod803dee53_dfa3_400a_ae0b_a63c82a7db52.slice.
Feb 13 15:45:09.303633 kubelet[2396]: I0213 15:45:09.303260    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5112db43-cf4e-486c-93da-d82a70b411c4\" (UniqueName: \"kubernetes.io/nfs/803dee53-dfa3-400a-ae0b-a63c82a7db52-pvc-5112db43-cf4e-486c-93da-d82a70b411c4\") pod \"test-pod-1\" (UID: \"803dee53-dfa3-400a-ae0b-a63c82a7db52\") " pod="default/test-pod-1"
Feb 13 15:45:09.303633 kubelet[2396]: I0213 15:45:09.303325    2396 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bskkp\" (UniqueName: \"kubernetes.io/projected/803dee53-dfa3-400a-ae0b-a63c82a7db52-kube-api-access-bskkp\") pod \"test-pod-1\" (UID: \"803dee53-dfa3-400a-ae0b-a63c82a7db52\") " pod="default/test-pod-1"
Feb 13 15:45:09.524848 kernel: FS-Cache: Loaded
Feb 13 15:45:09.673069 kernel: RPC: Registered named UNIX socket transport module.
Feb 13 15:45:09.674222 kernel: RPC: Registered udp transport module.
Feb 13 15:45:09.674271 kernel: RPC: Registered tcp transport module.
Feb 13 15:45:09.674302 kernel: RPC: Registered tcp-with-tls transport module.
Feb 13 15:45:09.674526 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 13 15:45:09.756921 kubelet[2396]: E0213 15:45:09.756876    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:45:10.084216 kernel: NFS: Registering the id_resolver key type
Feb 13 15:45:10.084357 kernel: Key type id_resolver registered
Feb 13 15:45:10.084398 kernel: Key type id_legacy registered
Feb 13 15:45:10.162113 nfsidmap[5218]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Feb 13 15:45:10.169671 nfsidmap[5219]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Feb 13 15:45:10.475521 containerd[1896]: time="2025-02-13T15:45:10.475474170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:803dee53-dfa3-400a-ae0b-a63c82a7db52,Namespace:default,Attempt:0,}"
Feb 13 15:45:10.757517 kubelet[2396]: E0213 15:45:10.757383    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:45:10.804066 (udev-worker)[5215]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 15:45:10.805128 systemd-networkd[1817]: cali5ec59c6bf6e: Link UP
Feb 13 15:45:10.805418 systemd-networkd[1817]: cali5ec59c6bf6e: Gained carrier
Feb 13 15:45:10.830967 containerd[1896]: 2025-02-13 15:45:10.583 [INFO][5220] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.30.13-k8s-test--pod--1-eth0  default  803dee53-dfa3-400a-ae0b-a63c82a7db52 1585 0 2025-02-13 15:44:39 +0000 UTC <nil> <nil> map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s  172.31.30.13  test-pod-1 eth0 default [] []   [kns.default ksa.default.default] cali5ec59c6bf6e  [] []}} ContainerID="e5a229e8a9d7c5a337322f04c1035ea8495d0aea8437155337046dfaeb2c8098" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.30.13-k8s-test--pod--1-"
Feb 13 15:45:10.830967 containerd[1896]: 2025-02-13 15:45:10.583 [INFO][5220] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e5a229e8a9d7c5a337322f04c1035ea8495d0aea8437155337046dfaeb2c8098" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.30.13-k8s-test--pod--1-eth0"
Feb 13 15:45:10.830967 containerd[1896]: 2025-02-13 15:45:10.699 [INFO][5231] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e5a229e8a9d7c5a337322f04c1035ea8495d0aea8437155337046dfaeb2c8098" HandleID="k8s-pod-network.e5a229e8a9d7c5a337322f04c1035ea8495d0aea8437155337046dfaeb2c8098" Workload="172.31.30.13-k8s-test--pod--1-eth0"
Feb 13 15:45:10.830967 containerd[1896]: 2025-02-13 15:45:10.732 [INFO][5231] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e5a229e8a9d7c5a337322f04c1035ea8495d0aea8437155337046dfaeb2c8098" HandleID="k8s-pod-network.e5a229e8a9d7c5a337322f04c1035ea8495d0aea8437155337046dfaeb2c8098" Workload="172.31.30.13-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000336d90), Attrs:map[string]string{"namespace":"default", "node":"172.31.30.13", "pod":"test-pod-1", "timestamp":"2025-02-13 15:45:10.699476299 +0000 UTC"}, Hostname:"172.31.30.13", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 13 15:45:10.830967 containerd[1896]: 2025-02-13 15:45:10.732 [INFO][5231] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 15:45:10.830967 containerd[1896]: 2025-02-13 15:45:10.732 [INFO][5231] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 15:45:10.830967 containerd[1896]: 2025-02-13 15:45:10.732 [INFO][5231] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.30.13'
Feb 13 15:45:10.830967 containerd[1896]: 2025-02-13 15:45:10.742 [INFO][5231] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e5a229e8a9d7c5a337322f04c1035ea8495d0aea8437155337046dfaeb2c8098" host="172.31.30.13"
Feb 13 15:45:10.830967 containerd[1896]: 2025-02-13 15:45:10.753 [INFO][5231] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.30.13"
Feb 13 15:45:10.830967 containerd[1896]: 2025-02-13 15:45:10.763 [INFO][5231] ipam/ipam.go 489: Trying affinity for 192.168.19.192/26 host="172.31.30.13"
Feb 13 15:45:10.830967 containerd[1896]: 2025-02-13 15:45:10.767 [INFO][5231] ipam/ipam.go 155: Attempting to load block cidr=192.168.19.192/26 host="172.31.30.13"
Feb 13 15:45:10.830967 containerd[1896]: 2025-02-13 15:45:10.773 [INFO][5231] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.19.192/26 host="172.31.30.13"
Feb 13 15:45:10.830967 containerd[1896]: 2025-02-13 15:45:10.773 [INFO][5231] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.19.192/26 handle="k8s-pod-network.e5a229e8a9d7c5a337322f04c1035ea8495d0aea8437155337046dfaeb2c8098" host="172.31.30.13"
Feb 13 15:45:10.830967 containerd[1896]: 2025-02-13 15:45:10.775 [INFO][5231] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e5a229e8a9d7c5a337322f04c1035ea8495d0aea8437155337046dfaeb2c8098
Feb 13 15:45:10.830967 containerd[1896]: 2025-02-13 15:45:10.786 [INFO][5231] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.19.192/26 handle="k8s-pod-network.e5a229e8a9d7c5a337322f04c1035ea8495d0aea8437155337046dfaeb2c8098" host="172.31.30.13"
Feb 13 15:45:10.830967 containerd[1896]: 2025-02-13 15:45:10.798 [INFO][5231] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.19.197/26] block=192.168.19.192/26 handle="k8s-pod-network.e5a229e8a9d7c5a337322f04c1035ea8495d0aea8437155337046dfaeb2c8098" host="172.31.30.13"
Feb 13 15:45:10.830967 containerd[1896]: 2025-02-13 15:45:10.798 [INFO][5231] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.19.197/26] handle="k8s-pod-network.e5a229e8a9d7c5a337322f04c1035ea8495d0aea8437155337046dfaeb2c8098" host="172.31.30.13"
Feb 13 15:45:10.830967 containerd[1896]: 2025-02-13 15:45:10.798 [INFO][5231] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 15:45:10.830967 containerd[1896]: 2025-02-13 15:45:10.798 [INFO][5231] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.19.197/26] IPv6=[] ContainerID="e5a229e8a9d7c5a337322f04c1035ea8495d0aea8437155337046dfaeb2c8098" HandleID="k8s-pod-network.e5a229e8a9d7c5a337322f04c1035ea8495d0aea8437155337046dfaeb2c8098" Workload="172.31.30.13-k8s-test--pod--1-eth0"
Feb 13 15:45:10.830967 containerd[1896]: 2025-02-13 15:45:10.800 [INFO][5220] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e5a229e8a9d7c5a337322f04c1035ea8495d0aea8437155337046dfaeb2c8098" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.30.13-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.13-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"803dee53-dfa3-400a-ae0b-a63c82a7db52", ResourceVersion:"1585", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 44, 39, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.13", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.19.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 15:45:10.836528 containerd[1896]: 2025-02-13 15:45:10.800 [INFO][5220] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.19.197/32] ContainerID="e5a229e8a9d7c5a337322f04c1035ea8495d0aea8437155337046dfaeb2c8098" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.30.13-k8s-test--pod--1-eth0"
Feb 13 15:45:10.836528 containerd[1896]: 2025-02-13 15:45:10.800 [INFO][5220] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="e5a229e8a9d7c5a337322f04c1035ea8495d0aea8437155337046dfaeb2c8098" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.30.13-k8s-test--pod--1-eth0"
Feb 13 15:45:10.836528 containerd[1896]: 2025-02-13 15:45:10.805 [INFO][5220] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e5a229e8a9d7c5a337322f04c1035ea8495d0aea8437155337046dfaeb2c8098" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.30.13-k8s-test--pod--1-eth0"
Feb 13 15:45:10.836528 containerd[1896]: 2025-02-13 15:45:10.806 [INFO][5220] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e5a229e8a9d7c5a337322f04c1035ea8495d0aea8437155337046dfaeb2c8098" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.30.13-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.30.13-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"803dee53-dfa3-400a-ae0b-a63c82a7db52", ResourceVersion:"1585", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 15, 44, 39, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.30.13", ContainerID:"e5a229e8a9d7c5a337322f04c1035ea8495d0aea8437155337046dfaeb2c8098", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.19.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"32:0b:51:bf:7a:53", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 15:45:10.836528 containerd[1896]: 2025-02-13 15:45:10.821 [INFO][5220] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e5a229e8a9d7c5a337322f04c1035ea8495d0aea8437155337046dfaeb2c8098" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.30.13-k8s-test--pod--1-eth0"
Feb 13 15:45:10.882369 containerd[1896]: time="2025-02-13T15:45:10.882265922Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 15:45:10.882369 containerd[1896]: time="2025-02-13T15:45:10.882328163Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 15:45:10.885890 containerd[1896]: time="2025-02-13T15:45:10.882343971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:45:10.885890 containerd[1896]: time="2025-02-13T15:45:10.882446586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 15:45:10.932104 systemd[1]: Started cri-containerd-e5a229e8a9d7c5a337322f04c1035ea8495d0aea8437155337046dfaeb2c8098.scope - libcontainer container e5a229e8a9d7c5a337322f04c1035ea8495d0aea8437155337046dfaeb2c8098.
Feb 13 15:45:10.996428 containerd[1896]: time="2025-02-13T15:45:10.996388380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:803dee53-dfa3-400a-ae0b-a63c82a7db52,Namespace:default,Attempt:0,} returns sandbox id \"e5a229e8a9d7c5a337322f04c1035ea8495d0aea8437155337046dfaeb2c8098\""
Feb 13 15:45:10.998955 containerd[1896]: time="2025-02-13T15:45:10.998526759Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 13 15:45:11.346816 containerd[1896]: time="2025-02-13T15:45:11.346749423Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 15:45:11.348319 containerd[1896]: time="2025-02-13T15:45:11.348245801Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Feb 13 15:45:11.359417 containerd[1896]: time="2025-02-13T15:45:11.359072308Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 360.505174ms"
Feb 13 15:45:11.359417 containerd[1896]: time="2025-02-13T15:45:11.359412570Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\""
Feb 13 15:45:11.375415 containerd[1896]: time="2025-02-13T15:45:11.374165410Z" level=info msg="CreateContainer within sandbox \"e5a229e8a9d7c5a337322f04c1035ea8495d0aea8437155337046dfaeb2c8098\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 13 15:45:11.409047 containerd[1896]: time="2025-02-13T15:45:11.408991218Z" level=info msg="CreateContainer within sandbox \"e5a229e8a9d7c5a337322f04c1035ea8495d0aea8437155337046dfaeb2c8098\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"a058dbe5f1e10fd3e2a6afb5c87323416b05eb40fdefdc0b40a1590613be9dc2\""
Feb 13 15:45:11.418885 containerd[1896]: time="2025-02-13T15:45:11.418421967Z" level=info msg="StartContainer for \"a058dbe5f1e10fd3e2a6afb5c87323416b05eb40fdefdc0b40a1590613be9dc2\""
Feb 13 15:45:11.493322 systemd[1]: Started cri-containerd-a058dbe5f1e10fd3e2a6afb5c87323416b05eb40fdefdc0b40a1590613be9dc2.scope - libcontainer container a058dbe5f1e10fd3e2a6afb5c87323416b05eb40fdefdc0b40a1590613be9dc2.
Feb 13 15:45:11.552062 containerd[1896]: time="2025-02-13T15:45:11.552013812Z" level=info msg="StartContainer for \"a058dbe5f1e10fd3e2a6afb5c87323416b05eb40fdefdc0b40a1590613be9dc2\" returns successfully"
Feb 13 15:45:11.728892 kubelet[2396]: I0213 15:45:11.728715    2396 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=32.361649725 podStartE2EDuration="32.728691171s" podCreationTimestamp="2025-02-13 15:44:39 +0000 UTC" firstStartedPulling="2025-02-13 15:45:10.99806895 +0000 UTC m=+83.079951233" lastFinishedPulling="2025-02-13 15:45:11.365110398 +0000 UTC m=+83.446992679" observedRunningTime="2025-02-13 15:45:11.728288843 +0000 UTC m=+83.810171134" watchObservedRunningTime="2025-02-13 15:45:11.728691171 +0000 UTC m=+83.810573465"
Feb 13 15:45:11.758624 kubelet[2396]: E0213 15:45:11.758563    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:45:12.614149 systemd-networkd[1817]: cali5ec59c6bf6e: Gained IPv6LL
Feb 13 15:45:12.758969 kubelet[2396]: E0213 15:45:12.758894    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:45:13.759949 kubelet[2396]: E0213 15:45:13.759876    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:45:14.760117 kubelet[2396]: E0213 15:45:14.760057    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 15:45:15.363061 ntpd[1878]: Listen normally on 13 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%10]:123
Feb 13 15:45:15.363572 ntpd[1878]: 13 Feb 15:45:15 ntpd[1878]: Listen normally on 13 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%10]:123
Feb 13 15:45:15.760505 kubelet[2396]: E0213 15:45:15.760446    2396 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"