Feb 13 18:51:38.187652 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 13 18:51:38.187704 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu Feb 13 17:29:42 -00 2025
Feb 13 18:51:38.187730 kernel: KASLR disabled due to lack of seed
Feb 13 18:51:38.187746 kernel: efi: EFI v2.7 by EDK II
Feb 13 18:51:38.187762 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a736a98 MEMRESERVE=0x78557598 
Feb 13 18:51:38.187777 kernel: secureboot: Secure boot disabled
Feb 13 18:51:38.187795 kernel: ACPI: Early table checksum verification disabled
Feb 13 18:51:38.187810 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 13 18:51:38.187848 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001      01000013)
Feb 13 18:51:38.187867 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 13 18:51:38.187890 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 13 18:51:38.187906 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 13 18:51:38.187925 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 13 18:51:38.187940 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 13 18:51:38.187958 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 13 18:51:38.188004 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 13 18:51:38.188023 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 13 18:51:38.188040 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 13 18:51:38.188056 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 13 18:51:38.188073 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 13 18:51:38.188089 kernel: printk: bootconsole [uart0] enabled
Feb 13 18:51:38.188106 kernel: NUMA: Failed to initialise from firmware
Feb 13 18:51:38.188122 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 18:51:38.188139 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Feb 13 18:51:38.188154 kernel: Zone ranges:
Feb 13 18:51:38.188171 kernel:   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
Feb 13 18:51:38.188192 kernel:   DMA32    empty
Feb 13 18:51:38.188209 kernel:   Normal   [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 13 18:51:38.188225 kernel: Movable zone start for each node
Feb 13 18:51:38.188241 kernel: Early memory node ranges
Feb 13 18:51:38.188257 kernel:   node   0: [mem 0x0000000040000000-0x000000007862ffff]
Feb 13 18:51:38.188273 kernel:   node   0: [mem 0x0000000078630000-0x000000007863ffff]
Feb 13 18:51:38.188289 kernel:   node   0: [mem 0x0000000078640000-0x00000000786effff]
Feb 13 18:51:38.188305 kernel:   node   0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 13 18:51:38.188321 kernel:   node   0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 13 18:51:38.188337 kernel:   node   0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 13 18:51:38.188353 kernel:   node   0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 13 18:51:38.188369 kernel:   node   0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 13 18:51:38.188390 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 13 18:51:38.188407 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 13 18:51:38.188430 kernel: psci: probing for conduit method from ACPI.
Feb 13 18:51:38.188447 kernel: psci: PSCIv1.0 detected in firmware.
Feb 13 18:51:38.188464 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 13 18:51:38.188485 kernel: psci: Trusted OS migration not required
Feb 13 18:51:38.188502 kernel: psci: SMC Calling Convention v1.1
Feb 13 18:51:38.188519 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Feb 13 18:51:38.188536 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Feb 13 18:51:38.188553 kernel: pcpu-alloc: [0] 0 [0] 1 
Feb 13 18:51:38.188570 kernel: Detected PIPT I-cache on CPU0
Feb 13 18:51:38.188587 kernel: CPU features: detected: GIC system register CPU interface
Feb 13 18:51:38.188604 kernel: CPU features: detected: Spectre-v2
Feb 13 18:51:38.188620 kernel: CPU features: detected: Spectre-v3a
Feb 13 18:51:38.188637 kernel: CPU features: detected: Spectre-BHB
Feb 13 18:51:38.188653 kernel: CPU features: detected: ARM erratum 1742098
Feb 13 18:51:38.188670 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 13 18:51:38.188691 kernel: alternatives: applying boot alternatives
Feb 13 18:51:38.188710 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=539c350343a869939e6505090036e362452d8f971fd4cfbad5e8b7882835b31b
Feb 13 18:51:38.188728 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 18:51:38.188746 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 13 18:51:38.188763 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 18:51:38.188779 kernel: Fallback order for Node 0: 0 
Feb 13 18:51:38.188796 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 991872
Feb 13 18:51:38.188813 kernel: Policy zone: Normal
Feb 13 18:51:38.188830 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 18:51:38.188846 kernel: software IO TLB: area num 2.
Feb 13 18:51:38.188868 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 13 18:51:38.188886 kernel: Memory: 3819640K/4030464K available (10304K kernel code, 2186K rwdata, 8092K rodata, 39936K init, 897K bss, 210824K reserved, 0K cma-reserved)
Feb 13 18:51:38.188903 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 18:51:38.188920 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 18:51:38.188938 kernel: rcu:         RCU event tracing is enabled.
Feb 13 18:51:38.188955 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 18:51:38.191038 kernel:         Trampoline variant of Tasks RCU enabled.
Feb 13 18:51:38.191069 kernel:         Tracing variant of Tasks RCU enabled.
Feb 13 18:51:38.191088 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 18:51:38.191106 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 18:51:38.191123 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 13 18:51:38.191155 kernel: GICv3: 96 SPIs implemented
Feb 13 18:51:38.191173 kernel: GICv3: 0 Extended SPIs implemented
Feb 13 18:51:38.191190 kernel: Root IRQ handler: gic_handle_irq
Feb 13 18:51:38.191209 kernel: GICv3: GICv3 features: 16 PPIs
Feb 13 18:51:38.191229 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 13 18:51:38.191248 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 13 18:51:38.191266 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Feb 13 18:51:38.191285 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Feb 13 18:51:38.191302 kernel: GICv3: using LPI property table @0x00000004000d0000
Feb 13 18:51:38.191320 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 13 18:51:38.191337 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Feb 13 18:51:38.191355 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 18:51:38.191378 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 13 18:51:38.191396 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 13 18:51:38.191414 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 13 18:51:38.191432 kernel: Console: colour dummy device 80x25
Feb 13 18:51:38.191450 kernel: printk: console [tty1] enabled
Feb 13 18:51:38.191468 kernel: ACPI: Core revision 20230628
Feb 13 18:51:38.191486 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 13 18:51:38.191505 kernel: pid_max: default: 32768 minimum: 301
Feb 13 18:51:38.191522 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 18:51:38.191540 kernel: landlock: Up and running.
Feb 13 18:51:38.191563 kernel: SELinux:  Initializing.
Feb 13 18:51:38.191580 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 18:51:38.191597 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 13 18:51:38.191615 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 18:51:38.191632 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 18:51:38.191650 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 18:51:38.191667 kernel: rcu:         Max phase no-delay instances is 400.
Feb 13 18:51:38.191684 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 13 18:51:38.191706 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 13 18:51:38.191723 kernel: Remapping and enabling EFI services.
Feb 13 18:51:38.191740 kernel: smp: Bringing up secondary CPUs ...
Feb 13 18:51:38.191757 kernel: Detected PIPT I-cache on CPU1
Feb 13 18:51:38.191774 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 13 18:51:38.191792 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Feb 13 18:51:38.191809 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 13 18:51:38.191849 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 18:51:38.191870 kernel: SMP: Total of 2 processors activated.
Feb 13 18:51:38.191887 kernel: CPU features: detected: 32-bit EL0 Support
Feb 13 18:51:38.191912 kernel: CPU features: detected: 32-bit EL1 Support
Feb 13 18:51:38.191929 kernel: CPU features: detected: CRC32 instructions
Feb 13 18:51:38.191958 kernel: CPU: All CPU(s) started at EL1
Feb 13 18:51:38.192017 kernel: alternatives: applying system-wide alternatives
Feb 13 18:51:38.192035 kernel: devtmpfs: initialized
Feb 13 18:51:38.192054 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 18:51:38.192072 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 18:51:38.192089 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 18:51:38.192108 kernel: SMBIOS 3.0.0 present.
Feb 13 18:51:38.192133 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 13 18:51:38.192151 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 18:51:38.192169 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 13 18:51:38.192187 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 13 18:51:38.192205 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 13 18:51:38.192223 kernel: audit: initializing netlink subsys (disabled)
Feb 13 18:51:38.192241 kernel: audit: type=2000 audit(0.223:1): state=initialized audit_enabled=0 res=1
Feb 13 18:51:38.192264 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 18:51:38.192283 kernel: cpuidle: using governor menu
Feb 13 18:51:38.192301 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 13 18:51:38.192319 kernel: ASID allocator initialised with 65536 entries
Feb 13 18:51:38.192337 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 18:51:38.192355 kernel: Serial: AMBA PL011 UART driver
Feb 13 18:51:38.192373 kernel: Modules: 17360 pages in range for non-PLT usage
Feb 13 18:51:38.192390 kernel: Modules: 508880 pages in range for PLT usage
Feb 13 18:51:38.192408 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 18:51:38.192431 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 18:51:38.192450 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Feb 13 18:51:38.192468 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Feb 13 18:51:38.192486 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 18:51:38.192504 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 18:51:38.192523 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Feb 13 18:51:38.192541 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Feb 13 18:51:38.192559 kernel: ACPI: Added _OSI(Module Device)
Feb 13 18:51:38.192577 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 18:51:38.192599 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 18:51:38.192618 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 18:51:38.192636 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 18:51:38.192655 kernel: ACPI: Interpreter enabled
Feb 13 18:51:38.192673 kernel: ACPI: Using GIC for interrupt routing
Feb 13 18:51:38.192691 kernel: ACPI: MCFG table detected, 1 entries
Feb 13 18:51:38.192710 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 13 18:51:38.195319 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 18:51:38.195605 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 18:51:38.195844 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 18:51:38.196134 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 13 18:51:38.196344 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 13 18:51:38.196370 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io  0x0000-0xffff window]
Feb 13 18:51:38.196390 kernel: acpiphp: Slot [1] registered
Feb 13 18:51:38.196409 kernel: acpiphp: Slot [2] registered
Feb 13 18:51:38.196428 kernel: acpiphp: Slot [3] registered
Feb 13 18:51:38.196456 kernel: acpiphp: Slot [4] registered
Feb 13 18:51:38.196474 kernel: acpiphp: Slot [5] registered
Feb 13 18:51:38.196492 kernel: acpiphp: Slot [6] registered
Feb 13 18:51:38.196510 kernel: acpiphp: Slot [7] registered
Feb 13 18:51:38.196528 kernel: acpiphp: Slot [8] registered
Feb 13 18:51:38.196545 kernel: acpiphp: Slot [9] registered
Feb 13 18:51:38.196563 kernel: acpiphp: Slot [10] registered
Feb 13 18:51:38.196581 kernel: acpiphp: Slot [11] registered
Feb 13 18:51:38.196599 kernel: acpiphp: Slot [12] registered
Feb 13 18:51:38.196617 kernel: acpiphp: Slot [13] registered
Feb 13 18:51:38.196640 kernel: acpiphp: Slot [14] registered
Feb 13 18:51:38.196658 kernel: acpiphp: Slot [15] registered
Feb 13 18:51:38.196677 kernel: acpiphp: Slot [16] registered
Feb 13 18:51:38.196695 kernel: acpiphp: Slot [17] registered
Feb 13 18:51:38.196714 kernel: acpiphp: Slot [18] registered
Feb 13 18:51:38.196732 kernel: acpiphp: Slot [19] registered
Feb 13 18:51:38.196750 kernel: acpiphp: Slot [20] registered
Feb 13 18:51:38.196768 kernel: acpiphp: Slot [21] registered
Feb 13 18:51:38.196786 kernel: acpiphp: Slot [22] registered
Feb 13 18:51:38.196808 kernel: acpiphp: Slot [23] registered
Feb 13 18:51:38.196827 kernel: acpiphp: Slot [24] registered
Feb 13 18:51:38.196845 kernel: acpiphp: Slot [25] registered
Feb 13 18:51:38.196863 kernel: acpiphp: Slot [26] registered
Feb 13 18:51:38.196881 kernel: acpiphp: Slot [27] registered
Feb 13 18:51:38.196898 kernel: acpiphp: Slot [28] registered
Feb 13 18:51:38.196917 kernel: acpiphp: Slot [29] registered
Feb 13 18:51:38.196935 kernel: acpiphp: Slot [30] registered
Feb 13 18:51:38.196953 kernel: acpiphp: Slot [31] registered
Feb 13 18:51:38.196995 kernel: PCI host bridge to bus 0000:00
Feb 13 18:51:38.197229 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 13 18:51:38.197417 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0xffff window]
Feb 13 18:51:38.197657 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 13 18:51:38.200614 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 13 18:51:38.200870 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 13 18:51:38.201129 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 13 18:51:38.201358 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 13 18:51:38.201586 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 13 18:51:38.201799 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 13 18:51:38.202035 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 18:51:38.202263 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 13 18:51:38.202473 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 13 18:51:38.202677 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 13 18:51:38.202887 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 13 18:51:38.203123 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 13 18:51:38.205292 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 13 18:51:38.205540 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 13 18:51:38.205751 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 13 18:51:38.205950 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 13 18:51:38.206198 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 13 18:51:38.206401 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 13 18:51:38.206580 kernel: pci_bus 0000:00: resource 5 [io  0x0000-0xffff window]
Feb 13 18:51:38.206758 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 13 18:51:38.206783 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 13 18:51:38.206803 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 13 18:51:38.206822 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 13 18:51:38.206840 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 13 18:51:38.206859 kernel: iommu: Default domain type: Translated
Feb 13 18:51:38.206884 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 13 18:51:38.206903 kernel: efivars: Registered efivars operations
Feb 13 18:51:38.206921 kernel: vgaarb: loaded
Feb 13 18:51:38.206940 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 13 18:51:38.206958 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 18:51:38.211316 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 18:51:38.211342 kernel: pnp: PnP ACPI init
Feb 13 18:51:38.211656 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 13 18:51:38.211697 kernel: pnp: PnP ACPI: found 1 devices
Feb 13 18:51:38.211717 kernel: NET: Registered PF_INET protocol family
Feb 13 18:51:38.211736 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 13 18:51:38.211754 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 13 18:51:38.211773 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 18:51:38.211792 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 18:51:38.211810 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Feb 13 18:51:38.211846 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 13 18:51:38.211867 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 18:51:38.211893 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 13 18:51:38.211913 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 18:51:38.211931 kernel: PCI: CLS 0 bytes, default 64
Feb 13 18:51:38.211950 kernel: kvm [1]: HYP mode not available
Feb 13 18:51:38.211985 kernel: Initialise system trusted keyrings
Feb 13 18:51:38.212009 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 13 18:51:38.212028 kernel: Key type asymmetric registered
Feb 13 18:51:38.212046 kernel: Asymmetric key parser 'x509' registered
Feb 13 18:51:38.212064 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Feb 13 18:51:38.212089 kernel: io scheduler mq-deadline registered
Feb 13 18:51:38.212107 kernel: io scheduler kyber registered
Feb 13 18:51:38.212126 kernel: io scheduler bfq registered
Feb 13 18:51:38.212379 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 13 18:51:38.212406 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 13 18:51:38.212425 kernel: ACPI: button: Power Button [PWRB]
Feb 13 18:51:38.212443 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Feb 13 18:51:38.212462 kernel: ACPI: button: Sleep Button [SLPB]
Feb 13 18:51:38.212487 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 18:51:38.212506 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 13 18:51:38.212712 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 13 18:51:38.212738 kernel: printk: console [ttyS0] disabled
Feb 13 18:51:38.212756 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 13 18:51:38.212774 kernel: printk: console [ttyS0] enabled
Feb 13 18:51:38.212792 kernel: printk: bootconsole [uart0] disabled
Feb 13 18:51:38.212810 kernel: thunder_xcv, ver 1.0
Feb 13 18:51:38.212828 kernel: thunder_bgx, ver 1.0
Feb 13 18:51:38.212845 kernel: nicpf, ver 1.0
Feb 13 18:51:38.212869 kernel: nicvf, ver 1.0
Feb 13 18:51:38.215287 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 13 18:51:38.215507 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-02-13T18:51:37 UTC (1739472697)
Feb 13 18:51:38.215533 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 18:51:38.215552 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 13 18:51:38.215571 kernel: watchdog: Delayed init of the lockup detector failed: -19
Feb 13 18:51:38.215590 kernel: watchdog: Hard watchdog permanently disabled
Feb 13 18:51:38.215618 kernel: NET: Registered PF_INET6 protocol family
Feb 13 18:51:38.215636 kernel: Segment Routing with IPv6
Feb 13 18:51:38.215654 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 18:51:38.215672 kernel: NET: Registered PF_PACKET protocol family
Feb 13 18:51:38.215690 kernel: Key type dns_resolver registered
Feb 13 18:51:38.215708 kernel: registered taskstats version 1
Feb 13 18:51:38.215726 kernel: Loading compiled-in X.509 certificates
Feb 13 18:51:38.215745 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 987d382bd4f498c8030ef29b348ef5d6fcf1f0e3'
Feb 13 18:51:38.215763 kernel: Key type .fscrypt registered
Feb 13 18:51:38.215781 kernel: Key type fscrypt-provisioning registered
Feb 13 18:51:38.215804 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 18:51:38.215838 kernel: ima: Allocated hash algorithm: sha1
Feb 13 18:51:38.215861 kernel: ima: No architecture policies found
Feb 13 18:51:38.215879 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 13 18:51:38.215898 kernel: clk: Disabling unused clocks
Feb 13 18:51:38.215916 kernel: Freeing unused kernel memory: 39936K
Feb 13 18:51:38.215934 kernel: Run /init as init process
Feb 13 18:51:38.215952 kernel:   with arguments:
Feb 13 18:51:38.215988 kernel:     /init
Feb 13 18:51:38.216015 kernel:   with environment:
Feb 13 18:51:38.216033 kernel:     HOME=/
Feb 13 18:51:38.216051 kernel:     TERM=linux
Feb 13 18:51:38.216068 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 18:51:38.216091 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 18:51:38.216114 systemd[1]: Detected virtualization amazon.
Feb 13 18:51:38.216134 systemd[1]: Detected architecture arm64.
Feb 13 18:51:38.216158 systemd[1]: Running in initrd.
Feb 13 18:51:38.216177 systemd[1]: No hostname configured, using default hostname.
Feb 13 18:51:38.216196 systemd[1]: Hostname set to <localhost>.
Feb 13 18:51:38.216217 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 18:51:38.216236 systemd[1]: Queued start job for default target initrd.target.
Feb 13 18:51:38.216256 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 18:51:38.216276 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 18:51:38.216296 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 18:51:38.216321 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 18:51:38.216342 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 18:51:38.216362 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 18:51:38.216384 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 18:51:38.216405 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 18:51:38.216424 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 18:51:38.216444 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 18:51:38.216468 systemd[1]: Reached target paths.target - Path Units.
Feb 13 18:51:38.216488 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 18:51:38.216508 systemd[1]: Reached target swap.target - Swaps.
Feb 13 18:51:38.216527 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 18:51:38.216547 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 18:51:38.216567 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 18:51:38.216587 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 18:51:38.216607 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 18:51:38.216626 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 18:51:38.216650 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 18:51:38.216670 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 18:51:38.216690 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 18:51:38.216710 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 18:51:38.216729 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 18:51:38.216749 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 18:51:38.216769 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 18:51:38.216789 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 18:51:38.216813 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 18:51:38.216833 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 18:51:38.216853 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 18:51:38.216914 systemd-journald[252]: Collecting audit messages is disabled.
Feb 13 18:51:38.216963 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 18:51:38.221063 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 18:51:38.221088 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 18:51:38.221109 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 18:51:38.221132 systemd-journald[252]: Journal started
Feb 13 18:51:38.221191 systemd-journald[252]: Runtime Journal (/run/log/journal/ec2d88f17d8487b791eb664f7285140f) is 8.0M, max 75.3M, 67.3M free.
Feb 13 18:51:38.196028 systemd-modules-load[253]: Inserted module 'overlay'
Feb 13 18:51:38.226499 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 18:51:38.229055 kernel: Bridge firewalling registered
Feb 13 18:51:38.228872 systemd-modules-load[253]: Inserted module 'br_netfilter'
Feb 13 18:51:38.237242 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 18:51:38.239008 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 18:51:38.244050 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 18:51:38.248435 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 18:51:38.264309 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 18:51:38.271366 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 18:51:38.276582 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 18:51:38.308884 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 18:51:38.325495 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 18:51:38.341344 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 18:51:38.355372 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 18:51:38.363587 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 18:51:38.371232 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 18:51:38.400904 dracut-cmdline[289]: dracut-dracut-053
Feb 13 18:51:38.412239 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=539c350343a869939e6505090036e362452d8f971fd4cfbad5e8b7882835b31b
Feb 13 18:51:38.454023 systemd-resolved[282]: Positive Trust Anchors:
Feb 13 18:51:38.455349 systemd-resolved[282]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 18:51:38.455412 systemd-resolved[282]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 18:51:38.587013 kernel: SCSI subsystem initialized
Feb 13 18:51:38.595024 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 18:51:38.607014 kernel: iscsi: registered transport (tcp)
Feb 13 18:51:38.629553 kernel: iscsi: registered transport (qla4xxx)
Feb 13 18:51:38.629660 kernel: QLogic iSCSI HBA Driver
Feb 13 18:51:38.701009 kernel: random: crng init done
Feb 13 18:51:38.701595 systemd-resolved[282]: Defaulting to hostname 'linux'.
Feb 13 18:51:38.705401 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 18:51:38.709798 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 18:51:38.735020 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 18:51:38.744309 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 18:51:38.779668 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 18:51:38.779743 kernel: device-mapper: uevent: version 1.0.3
Feb 13 18:51:38.779770 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 18:51:38.846009 kernel: raid6: neonx8   gen()  6563 MB/s
Feb 13 18:51:38.863000 kernel: raid6: neonx4   gen()  6514 MB/s
Feb 13 18:51:38.879999 kernel: raid6: neonx2   gen()  5438 MB/s
Feb 13 18:51:38.897003 kernel: raid6: neonx1   gen()  3956 MB/s
Feb 13 18:51:38.913998 kernel: raid6: int64x8  gen()  3638 MB/s
Feb 13 18:51:38.930999 kernel: raid6: int64x4  gen()  3706 MB/s
Feb 13 18:51:38.947999 kernel: raid6: int64x2  gen()  3606 MB/s
Feb 13 18:51:38.965781 kernel: raid6: int64x1  gen()  2765 MB/s
Feb 13 18:51:38.965838 kernel: raid6: using algorithm neonx8 gen() 6563 MB/s
Feb 13 18:51:38.983748 kernel: raid6: .... xor() 4758 MB/s, rmw enabled
Feb 13 18:51:38.983791 kernel: raid6: using neon recovery algorithm
Feb 13 18:51:38.991003 kernel: xor: measuring software checksum speed
Feb 13 18:51:38.992003 kernel:    8regs           : 11570 MB/sec
Feb 13 18:51:38.992999 kernel:    32regs          : 11898 MB/sec
Feb 13 18:51:38.995041 kernel:    arm64_neon      :  8927 MB/sec
Feb 13 18:51:38.995075 kernel: xor: using function: 32regs (11898 MB/sec)
Feb 13 18:51:39.079022 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 18:51:39.100602 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 18:51:39.111606 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 18:51:39.150705 systemd-udevd[470]: Using default interface naming scheme 'v255'.
Feb 13 18:51:39.159348 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 18:51:39.173267 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 18:51:39.205476 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation
Feb 13 18:51:39.260006 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 18:51:39.271276 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 18:51:39.389607 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 18:51:39.401314 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 18:51:39.447235 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 18:51:39.452862 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 18:51:39.453026 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 18:51:39.453276 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 18:51:39.473893 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 18:51:39.503226 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 18:51:39.587467 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 13 18:51:39.587550 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Feb 13 18:51:39.607899 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 13 18:51:39.610042 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 13 18:51:39.610284 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:24:5d:85:87:51
Feb 13 18:51:39.596690 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 18:51:39.598205 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 18:51:39.601091 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 18:51:39.605252 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 18:51:39.605531 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 18:51:39.607808 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 18:51:39.614120 (udev-worker)[532]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 18:51:39.626476 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 18:51:39.663806 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Feb 13 18:51:39.663889 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 13 18:51:39.676328 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 13 18:51:39.683355 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 18:51:39.683417 kernel: GPT:9289727 != 16777215
Feb 13 18:51:39.683443 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 18:51:39.683468 kernel: GPT:9289727 != 16777215
Feb 13 18:51:39.683491 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 18:51:39.683515 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 18:51:39.688635 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 18:51:39.700314 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 18:51:39.746382 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 18:51:39.779045 kernel: BTRFS: device fsid 55beb02a-1d0d-4a3e-812c-2737f0301ec8 devid 1 transid 39 /dev/nvme0n1p3 scanned by (udev-worker) (537)
Feb 13 18:51:39.812018 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by (udev-worker) (517)
Feb 13 18:51:39.864790 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Feb 13 18:51:39.886313 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Feb 13 18:51:39.931842 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Feb 13 18:51:39.937869 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Feb 13 18:51:39.954472 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 18:51:39.970309 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 18:51:39.980935 disk-uuid[660]: Primary Header is updated.
Feb 13 18:51:39.980935 disk-uuid[660]: Secondary Entries is updated.
Feb 13 18:51:39.980935 disk-uuid[660]: Secondary Header is updated.
Feb 13 18:51:39.992420 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 18:51:41.011058 kernel:  nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 13 18:51:41.011667 disk-uuid[662]: The operation has completed successfully.
Feb 13 18:51:41.191289 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 18:51:41.192597 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 18:51:41.242263 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 18:51:41.252095 sh[923]: Success
Feb 13 18:51:41.279001 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 13 18:51:41.390291 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 18:51:41.412214 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 18:51:41.415625 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 18:51:41.449137 kernel: BTRFS info (device dm-0): first mount of filesystem 55beb02a-1d0d-4a3e-812c-2737f0301ec8
Feb 13 18:51:41.449197 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Feb 13 18:51:41.450958 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 18:51:41.453259 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 18:51:41.453291 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 18:51:41.476987 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 18:51:41.492287 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 18:51:41.496156 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 18:51:41.510218 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 18:51:41.517135 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 18:51:41.542485 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0
Feb 13 18:51:41.542555 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 18:51:41.542593 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 18:51:41.550006 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 18:51:41.569683 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 18:51:41.572772 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0
Feb 13 18:51:41.584867 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 18:51:41.599398 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 18:51:41.713108 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 18:51:41.730442 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 18:51:41.784551 systemd-networkd[1130]: lo: Link UP
Feb 13 18:51:41.786678 ignition[1035]: Ignition 2.20.0
Feb 13 18:51:41.784568 systemd-networkd[1130]: lo: Gained carrier
Feb 13 18:51:41.786693 ignition[1035]: Stage: fetch-offline
Feb 13 18:51:41.788504 systemd-networkd[1130]: Enumeration completed
Feb 13 18:51:41.787181 ignition[1035]: no configs at "/usr/lib/ignition/base.d"
Feb 13 18:51:41.789079 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 18:51:41.787206 ignition[1035]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 18:51:41.791252 systemd[1]: Reached target network.target - Network.
Feb 13 18:51:41.787729 ignition[1035]: Ignition finished successfully
Feb 13 18:51:41.792666 systemd-networkd[1130]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 18:51:41.792673 systemd-networkd[1130]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 18:51:41.800739 systemd-networkd[1130]: eth0: Link UP
Feb 13 18:51:41.800747 systemd-networkd[1130]: eth0: Gained carrier
Feb 13 18:51:41.800763 systemd-networkd[1130]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 18:51:41.804738 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 18:51:41.842074 systemd-networkd[1130]: eth0: DHCPv4 address 172.31.21.163/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 18:51:41.845358 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 18:51:41.868037 ignition[1139]: Ignition 2.20.0
Feb 13 18:51:41.868066 ignition[1139]: Stage: fetch
Feb 13 18:51:41.868814 ignition[1139]: no configs at "/usr/lib/ignition/base.d"
Feb 13 18:51:41.868857 ignition[1139]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 18:51:41.869537 ignition[1139]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 18:51:41.879606 ignition[1139]: PUT result: OK
Feb 13 18:51:41.893305 ignition[1139]: parsed url from cmdline: ""
Feb 13 18:51:41.893331 ignition[1139]: no config URL provided
Feb 13 18:51:41.893350 ignition[1139]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 18:51:41.893402 ignition[1139]: no config at "/usr/lib/ignition/user.ign"
Feb 13 18:51:41.893434 ignition[1139]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 18:51:41.897056 ignition[1139]: PUT result: OK
Feb 13 18:51:41.898762 ignition[1139]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 13 18:51:41.904483 ignition[1139]: GET result: OK
Feb 13 18:51:41.904580 ignition[1139]: parsing config with SHA512: 9e50f4f46609acfa2932924c213add27cbdcbd104bb18eaa2f50e49162de900dbd8163c60bd1e91ecf9b15967b3512f2e85dfa65f845c6e26426fef2c60bb115
Feb 13 18:51:41.910499 unknown[1139]: fetched base config from "system"
Feb 13 18:51:41.910527 unknown[1139]: fetched base config from "system"
Feb 13 18:51:41.911386 ignition[1139]: fetch: fetch complete
Feb 13 18:51:41.910541 unknown[1139]: fetched user config from "aws"
Feb 13 18:51:41.911397 ignition[1139]: fetch: fetch passed
Feb 13 18:51:41.911487 ignition[1139]: Ignition finished successfully
Feb 13 18:51:41.922396 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 18:51:41.934324 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 18:51:41.962771 ignition[1146]: Ignition 2.20.0
Feb 13 18:51:41.962800 ignition[1146]: Stage: kargs
Feb 13 18:51:41.963929 ignition[1146]: no configs at "/usr/lib/ignition/base.d"
Feb 13 18:51:41.963956 ignition[1146]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 18:51:41.964172 ignition[1146]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 18:51:41.967700 ignition[1146]: PUT result: OK
Feb 13 18:51:41.975714 ignition[1146]: kargs: kargs passed
Feb 13 18:51:41.975844 ignition[1146]: Ignition finished successfully
Feb 13 18:51:41.980127 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 18:51:41.993178 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 18:51:42.017240 ignition[1152]: Ignition 2.20.0
Feb 13 18:51:42.017267 ignition[1152]: Stage: disks
Feb 13 18:51:42.018241 ignition[1152]: no configs at "/usr/lib/ignition/base.d"
Feb 13 18:51:42.018267 ignition[1152]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 18:51:42.018434 ignition[1152]: PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 18:51:42.022495 ignition[1152]: PUT result: OK
Feb 13 18:51:42.030542 ignition[1152]: disks: disks passed
Feb 13 18:51:42.030855 ignition[1152]: Ignition finished successfully
Feb 13 18:51:42.037629 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 18:51:42.042156 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 18:51:42.046716 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 18:51:42.049104 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 18:51:42.052877 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 18:51:42.056856 systemd[1]: Reached target basic.target - Basic System.
Feb 13 18:51:42.077225 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 18:51:42.124065 systemd-fsck[1160]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 18:51:42.130595 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 18:51:42.144418 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 18:51:42.236441 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 005a6458-8fd3-46f1-ab43-85ef18df7ccd r/w with ordered data mode. Quota mode: none.
Feb 13 18:51:42.237363 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 18:51:42.241055 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 18:51:42.257170 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 18:51:42.272114 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 18:51:42.272951 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 18:51:42.273198 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 18:51:42.273246 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 18:51:42.291514 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 18:51:42.309446 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1179)
Feb 13 18:51:42.312403 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 18:51:42.321690 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0
Feb 13 18:51:42.321916 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 18:51:42.321944 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 18:51:42.336066 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 18:51:42.339469 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 18:51:42.415196 initrd-setup-root[1204]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 18:51:42.424945 initrd-setup-root[1211]: cut: /sysroot/etc/group: No such file or directory
Feb 13 18:51:42.435387 initrd-setup-root[1218]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 18:51:42.444720 initrd-setup-root[1225]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 18:51:42.611948 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 18:51:42.620186 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 18:51:42.636722 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 18:51:42.652563 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 18:51:42.654780 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0
Feb 13 18:51:42.692094 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 18:51:42.700517 ignition[1294]: INFO     : Ignition 2.20.0
Feb 13 18:51:42.700517 ignition[1294]: INFO     : Stage: mount
Feb 13 18:51:42.703630 ignition[1294]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 18:51:42.703630 ignition[1294]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 18:51:42.707693 ignition[1294]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 18:51:42.710612 ignition[1294]: INFO     : PUT result: OK
Feb 13 18:51:42.714672 ignition[1294]: INFO     : mount: mount passed
Feb 13 18:51:42.716644 ignition[1294]: INFO     : Ignition finished successfully
Feb 13 18:51:42.720301 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 18:51:42.728188 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 18:51:42.757373 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 18:51:42.782032 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/nvme0n1p6 scanned by mount (1306)
Feb 13 18:51:42.786963 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 0d7adf00-1aa3-4485-af0a-91514918afd0
Feb 13 18:51:42.787023 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 13 18:51:42.787050 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 13 18:51:42.792447 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 13 18:51:42.795122 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 18:51:42.842616 ignition[1323]: INFO     : Ignition 2.20.0
Feb 13 18:51:42.844750 ignition[1323]: INFO     : Stage: files
Feb 13 18:51:42.846915 ignition[1323]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 18:51:42.846915 ignition[1323]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 18:51:42.851134 ignition[1323]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 18:51:42.854195 ignition[1323]: INFO     : PUT result: OK
Feb 13 18:51:42.858699 ignition[1323]: DEBUG    : files: compiled without relabeling support, skipping
Feb 13 18:51:42.861541 ignition[1323]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Feb 13 18:51:42.861541 ignition[1323]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 18:51:42.871497 ignition[1323]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 18:51:42.874147 ignition[1323]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Feb 13 18:51:42.877279 unknown[1323]: wrote ssh authorized keys file for user: core
Feb 13 18:51:42.879599 ignition[1323]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 18:51:42.885415 ignition[1323]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/home/core/install.sh"
Feb 13 18:51:42.885415 ignition[1323]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 18:51:42.885415 ignition[1323]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 18:51:42.885415 ignition[1323]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 18:51:42.885415 ignition[1323]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 18:51:42.885415 ignition[1323]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 18:51:42.885415 ignition[1323]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 18:51:42.885415 ignition[1323]: INFO     : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Feb 13 18:51:42.892210 systemd-networkd[1130]: eth0: Gained IPv6LL
Feb 13 18:51:43.249610 ignition[1323]: INFO     : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 13 18:51:43.627887 ignition[1323]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Feb 13 18:51:43.633388 ignition[1323]: INFO     : files: createResultFile: createFiles: op(7): [started]  writing file "/sysroot/etc/.ignition-result.json"
Feb 13 18:51:43.633388 ignition[1323]: INFO     : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 18:51:43.633388 ignition[1323]: INFO     : files: files passed
Feb 13 18:51:43.633388 ignition[1323]: INFO     : Ignition finished successfully
Feb 13 18:51:43.634238 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 18:51:43.669361 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 18:51:43.675467 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 18:51:43.699218 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 18:51:43.701441 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 18:51:43.712602 initrd-setup-root-after-ignition[1351]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 18:51:43.712602 initrd-setup-root-after-ignition[1351]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 18:51:43.720040 initrd-setup-root-after-ignition[1355]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 18:51:43.723738 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 18:51:43.730540 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 18:51:43.742317 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 18:51:43.791878 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 18:51:43.792441 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 18:51:43.799344 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 18:51:43.801376 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 18:51:43.803378 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 18:51:43.819247 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 18:51:43.846049 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 18:51:43.859243 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 18:51:43.882191 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 18:51:43.885305 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 18:51:43.889208 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 18:51:43.894493 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 18:51:43.894896 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 18:51:43.901795 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 18:51:43.903895 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 18:51:43.906142 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 18:51:43.910503 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 18:51:43.918424 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 18:51:43.921080 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 18:51:43.924741 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 18:51:43.931459 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 18:51:43.933601 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 18:51:43.936347 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 18:51:43.942231 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 18:51:43.942452 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 18:51:43.945005 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 18:51:43.952471 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 18:51:43.954864 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 18:51:43.957480 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 18:51:43.960295 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 18:51:43.960501 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 18:51:43.968533 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 18:51:43.970566 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 18:51:43.974010 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 18:51:43.974290 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 18:51:43.991428 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 18:51:43.995911 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 18:51:43.996216 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 18:51:44.014017 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 18:51:44.015935 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 18:51:44.021165 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 18:51:44.028144 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 18:51:44.031021 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 18:51:44.039995 ignition[1375]: INFO     : Ignition 2.20.0
Feb 13 18:51:44.039995 ignition[1375]: INFO     : Stage: umount
Feb 13 18:51:44.049824 ignition[1375]: INFO     : no configs at "/usr/lib/ignition/base.d"
Feb 13 18:51:44.049824 ignition[1375]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 13 18:51:44.049824 ignition[1375]: INFO     : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 13 18:51:44.049824 ignition[1375]: INFO     : PUT result: OK
Feb 13 18:51:44.049824 ignition[1375]: INFO     : umount: umount passed
Feb 13 18:51:44.049824 ignition[1375]: INFO     : Ignition finished successfully
Feb 13 18:51:44.050778 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 18:51:44.051223 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 18:51:44.061625 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 18:51:44.061849 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 18:51:44.082063 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 18:51:44.082167 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 18:51:44.090418 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 18:51:44.090526 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 18:51:44.094032 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 18:51:44.094341 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 18:51:44.098894 systemd[1]: Stopped target network.target - Network.
Feb 13 18:51:44.103832 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 18:51:44.104135 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 18:51:44.108140 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 18:51:44.110720 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 18:51:44.118285 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 18:51:44.120730 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 18:51:44.122432 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 18:51:44.124253 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 18:51:44.124328 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 18:51:44.126186 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 18:51:44.126252 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 18:51:44.128178 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 18:51:44.128259 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 18:51:44.130141 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 18:51:44.130215 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 18:51:44.132802 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 18:51:44.138433 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 18:51:44.142058 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 18:51:44.142924 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 18:51:44.143128 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 18:51:44.146693 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 18:51:44.146853 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 18:51:44.152166 systemd-networkd[1130]: eth0: DHCPv6 lease lost
Feb 13 18:51:44.157076 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 18:51:44.157291 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 18:51:44.179199 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 18:51:44.181064 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 18:51:44.191611 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 18:51:44.191710 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 18:51:44.201714 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 18:51:44.212333 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 18:51:44.212448 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 18:51:44.215696 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 18:51:44.215780 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 18:51:44.218184 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 18:51:44.218266 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 18:51:44.220639 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 18:51:44.220721 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 18:51:44.226155 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 18:51:44.267008 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 18:51:44.267282 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 18:51:44.275031 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 18:51:44.275231 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 18:51:44.279625 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 18:51:44.279724 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 18:51:44.279859 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 18:51:44.280816 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 18:51:44.285885 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 18:51:44.286002 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 18:51:44.286716 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 18:51:44.286792 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 18:51:44.326456 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 18:51:44.334365 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 18:51:44.334483 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 18:51:44.336908 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 18:51:44.337013 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 18:51:44.342040 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 18:51:44.342431 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 18:51:44.358627 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 18:51:44.358801 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 18:51:44.364593 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 18:51:44.381611 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 18:51:44.400390 systemd[1]: Switching root.
Feb 13 18:51:44.448000 systemd-journald[252]: Journal stopped
Feb 13 18:51:46.117671 systemd-journald[252]: Received SIGTERM from PID 1 (systemd).
Feb 13 18:51:46.117799 kernel: SELinux:  policy capability network_peer_controls=1
Feb 13 18:51:46.117848 kernel: SELinux:  policy capability open_perms=1
Feb 13 18:51:46.117879 kernel: SELinux:  policy capability extended_socket_class=1
Feb 13 18:51:46.117907 kernel: SELinux:  policy capability always_check_network=0
Feb 13 18:51:46.117942 kernel: SELinux:  policy capability cgroup_seclabel=1
Feb 13 18:51:46.117992 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Feb 13 18:51:46.118027 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Feb 13 18:51:46.118057 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Feb 13 18:51:46.118087 kernel: audit: type=1403 audit(1739472704.710:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 18:51:46.118133 systemd[1]: Successfully loaded SELinux policy in 49.881ms.
Feb 13 18:51:46.118184 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.400ms.
Feb 13 18:51:46.118224 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 18:51:46.118259 systemd[1]: Detected virtualization amazon.
Feb 13 18:51:46.118302 systemd[1]: Detected architecture arm64.
Feb 13 18:51:46.118333 systemd[1]: Detected first boot.
Feb 13 18:51:46.118367 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 18:51:46.118404 zram_generator::config[1418]: No configuration found.
Feb 13 18:51:46.118440 systemd[1]: Populated /etc with preset unit settings.
Feb 13 18:51:46.118475 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 18:51:46.118510 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 18:51:46.118546 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 18:51:46.118582 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 18:51:46.118614 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 18:51:46.118647 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 18:51:46.118676 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 18:51:46.118709 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 18:51:46.118754 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 18:51:46.118784 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 18:51:46.118827 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 18:51:46.118862 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 18:51:46.118892 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 18:51:46.118922 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 18:51:46.118958 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 18:51:46.123060 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 18:51:46.123110 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 18:51:46.123150 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 18:51:46.123182 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 18:51:46.123213 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 18:51:46.123242 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 18:51:46.123271 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 18:51:46.123300 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 18:51:46.123331 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 18:51:46.123362 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 18:51:46.123397 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 18:51:46.123430 systemd[1]: Reached target swap.target - Swaps.
Feb 13 18:51:46.123459 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 18:51:46.123489 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 18:51:46.123518 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 18:51:46.123549 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 18:51:46.123581 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 18:51:46.123611 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 18:51:46.123639 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 18:51:46.123675 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 18:51:46.123704 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 18:51:46.123738 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 18:51:46.123775 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 18:51:46.123837 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 18:51:46.123876 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 18:51:46.123908 systemd[1]: Reached target machines.target - Containers.
Feb 13 18:51:46.123939 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 18:51:46.123993 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 18:51:46.124035 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 18:51:46.124064 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 18:51:46.124099 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 18:51:46.124128 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 18:51:46.124163 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 18:51:46.124472 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 18:51:46.124508 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 18:51:46.124537 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 18:51:46.124571 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 18:51:46.124604 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 18:51:46.124636 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 18:51:46.124666 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 18:51:46.124698 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 18:51:46.124727 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 18:51:46.124755 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 18:51:46.124788 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 18:51:46.124817 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 18:51:46.124911 systemd-journald[1500]: Collecting audit messages is disabled.
Feb 13 18:51:46.128564 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 18:51:46.128655 systemd[1]: Stopped verity-setup.service.
Feb 13 18:51:46.128687 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 18:51:46.128727 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 18:51:46.128767 systemd-journald[1500]: Journal started
Feb 13 18:51:46.128854 systemd-journald[1500]: Runtime Journal (/run/log/journal/ec2d88f17d8487b791eb664f7285140f) is 8.0M, max 75.3M, 67.3M free.
Feb 13 18:51:45.682547 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 18:51:45.706235 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Feb 13 18:51:45.707013 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 18:51:46.141829 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 18:51:46.137705 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 18:51:46.140255 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 18:51:46.166675 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 18:51:46.169522 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 18:51:46.173069 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 18:51:46.176748 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 18:51:46.177082 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 18:51:46.182544 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 18:51:46.183082 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 18:51:46.187938 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 18:51:46.188575 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 18:51:46.194427 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 18:51:46.198496 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 18:51:46.199150 kernel: loop: module loaded
Feb 13 18:51:46.203606 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 18:51:46.204023 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 18:51:46.207011 kernel: fuse: init (API version 7.39)
Feb 13 18:51:46.215753 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 18:51:46.216370 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 18:51:46.219585 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 18:51:46.252845 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 18:51:46.255173 kernel: ACPI: bus type drm_connector registered
Feb 13 18:51:46.256466 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 18:51:46.256885 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 18:51:46.268990 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 18:51:46.278200 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 18:51:46.292472 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 18:51:46.295198 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 18:51:46.295288 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 18:51:46.302201 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 18:51:46.318317 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 18:51:46.325533 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 18:51:46.329490 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 18:51:46.337315 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 18:51:46.355287 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 18:51:46.358167 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 18:51:46.364137 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 18:51:46.366266 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 18:51:46.373325 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 18:51:46.383962 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 18:51:46.391687 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 18:51:46.397757 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 18:51:46.400312 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 18:51:46.403656 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 18:51:46.426777 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 18:51:46.430698 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 18:51:46.455290 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 18:51:46.457581 systemd-journald[1500]: Time spent on flushing to /var/log/journal/ec2d88f17d8487b791eb664f7285140f is 96.550ms for 892 entries.
Feb 13 18:51:46.457581 systemd-journald[1500]: System Journal (/var/log/journal/ec2d88f17d8487b791eb664f7285140f) is 8.0M, max 195.6M, 187.6M free.
Feb 13 18:51:46.574553 systemd-journald[1500]: Received client request to flush runtime journal.
Feb 13 18:51:46.574651 kernel: loop0: detected capacity change from 0 to 116784
Feb 13 18:51:46.580737 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 18:51:46.602093 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 18:51:46.611113 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 18:51:46.614862 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 18:51:46.615998 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 18:51:46.656029 kernel: loop1: detected capacity change from 0 to 194096
Feb 13 18:51:46.663863 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 18:51:46.676642 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 18:51:46.712374 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 18:51:46.730101 kernel: loop2: detected capacity change from 0 to 53784
Feb 13 18:51:46.726028 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 18:51:46.802829 systemd-tmpfiles[1564]: ACLs are not supported, ignoring.
Feb 13 18:51:46.802877 systemd-tmpfiles[1564]: ACLs are not supported, ignoring.
Feb 13 18:51:46.808807 udevadm[1567]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 18:51:46.826276 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 18:51:46.867018 kernel: loop3: detected capacity change from 0 to 113552
Feb 13 18:51:46.924455 kernel: loop4: detected capacity change from 0 to 116784
Feb 13 18:51:46.962306 kernel: loop5: detected capacity change from 0 to 194096
Feb 13 18:51:47.001745 kernel: loop6: detected capacity change from 0 to 53784
Feb 13 18:51:47.032001 kernel: loop7: detected capacity change from 0 to 113552
Feb 13 18:51:47.057277 (sd-merge)[1572]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Feb 13 18:51:47.060423 (sd-merge)[1572]: Merged extensions into '/usr'.
Feb 13 18:51:47.075740 systemd[1]: Reloading requested from client PID 1547 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 18:51:47.075776 systemd[1]: Reloading...
Feb 13 18:51:47.298995 zram_generator::config[1599]: No configuration found.
Feb 13 18:51:47.360227 ldconfig[1542]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 18:51:47.613195 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 18:51:47.728940 systemd[1]: Reloading finished in 651 ms.
Feb 13 18:51:47.777024 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 18:51:47.780375 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 18:51:47.784404 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 18:51:47.799238 systemd[1]: Starting ensure-sysext.service...
Feb 13 18:51:47.811149 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 18:51:47.819438 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 18:51:47.841381 systemd[1]: Reloading requested from client PID 1651 ('systemctl') (unit ensure-sysext.service)...
Feb 13 18:51:47.841427 systemd[1]: Reloading...
Feb 13 18:51:47.859027 systemd-tmpfiles[1652]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 18:51:47.859609 systemd-tmpfiles[1652]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 18:51:47.861546 systemd-tmpfiles[1652]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 18:51:47.863157 systemd-tmpfiles[1652]: ACLs are not supported, ignoring.
Feb 13 18:51:47.863303 systemd-tmpfiles[1652]: ACLs are not supported, ignoring.
Feb 13 18:51:47.872808 systemd-tmpfiles[1652]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 18:51:47.872834 systemd-tmpfiles[1652]: Skipping /boot
Feb 13 18:51:47.903701 systemd-tmpfiles[1652]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 18:51:47.903727 systemd-tmpfiles[1652]: Skipping /boot
Feb 13 18:51:47.985538 systemd-udevd[1653]: Using default interface naming scheme 'v255'.
Feb 13 18:51:48.037022 zram_generator::config[1682]: No configuration found.
Feb 13 18:51:48.179929 (udev-worker)[1716]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 18:51:48.439059 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 18:51:48.459006 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1716)
Feb 13 18:51:48.692276 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 18:51:48.693098 systemd[1]: Reloading finished in 850 ms.
Feb 13 18:51:48.718365 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 18:51:48.724759 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 18:51:48.756838 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 18:51:48.812558 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Feb 13 18:51:48.824564 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 18:51:48.845908 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 18:51:48.848479 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 18:51:48.852314 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 18:51:48.858372 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 18:51:48.866648 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 18:51:48.873298 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 18:51:48.881479 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 18:51:48.883714 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 18:51:48.889848 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 18:51:48.895316 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 18:51:48.902989 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 18:51:48.921257 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 18:51:48.923401 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 18:51:48.930940 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 18:51:48.940300 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 18:51:48.947341 systemd[1]: Finished ensure-sysext.service.
Feb 13 18:51:48.992284 lvm[1848]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 18:51:48.993610 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 18:51:48.993957 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 18:51:48.996892 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 18:51:49.000084 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 18:51:49.005866 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 18:51:49.047945 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 18:51:49.051219 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 18:51:49.051696 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 18:51:49.052674 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 18:51:49.052929 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 18:51:49.065686 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 18:51:49.089415 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 18:51:49.092286 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 18:51:49.100649 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 18:51:49.115597 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 18:51:49.159385 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 18:51:49.167065 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 18:51:49.171844 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 18:51:49.181553 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 18:51:49.192423 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 18:51:49.203160 augenrules[1894]: No rules
Feb 13 18:51:49.205619 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 18:51:49.206697 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 18:51:49.226356 lvm[1891]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 18:51:49.235555 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 18:51:49.254129 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 18:51:49.288119 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 18:51:49.294124 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 18:51:49.381183 systemd-networkd[1860]: lo: Link UP
Feb 13 18:51:49.381204 systemd-networkd[1860]: lo: Gained carrier
Feb 13 18:51:49.384223 systemd-networkd[1860]: Enumeration completed
Feb 13 18:51:49.384442 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 18:51:49.386246 systemd-networkd[1860]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 18:51:49.386253 systemd-networkd[1860]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 18:51:49.388892 systemd-networkd[1860]: eth0: Link UP
Feb 13 18:51:49.389389 systemd-networkd[1860]: eth0: Gained carrier
Feb 13 18:51:49.389423 systemd-networkd[1860]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 18:51:49.399333 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 18:51:49.402240 systemd-resolved[1862]: Positive Trust Anchors:
Feb 13 18:51:49.402278 systemd-resolved[1862]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 18:51:49.402341 systemd-resolved[1862]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 18:51:49.404299 systemd-networkd[1860]: eth0: DHCPv4 address 172.31.21.163/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 13 18:51:49.411349 systemd-resolved[1862]: Defaulting to hostname 'linux'.
Feb 13 18:51:49.414740 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 18:51:49.419947 systemd[1]: Reached target network.target - Network.
Feb 13 18:51:49.423775 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 18:51:49.426678 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 18:51:49.428915 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 18:51:49.431415 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 18:51:49.434175 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 18:51:49.436354 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 18:51:49.438594 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 18:51:49.441294 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 18:51:49.441354 systemd[1]: Reached target paths.target - Path Units.
Feb 13 18:51:49.443743 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 18:51:49.447263 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 18:51:49.452092 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 18:51:49.461522 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 18:51:49.464865 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 18:51:49.467523 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 18:51:49.470080 systemd[1]: Reached target basic.target - Basic System.
Feb 13 18:51:49.471984 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 18:51:49.472043 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 18:51:49.482354 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 18:51:49.489550 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Feb 13 18:51:49.500360 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 18:51:49.507039 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 18:51:49.520325 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 18:51:49.522962 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 18:51:49.544345 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 18:51:49.554349 systemd[1]: Started ntpd.service - Network Time Service.
Feb 13 18:51:49.558750 jq[1918]: false
Feb 13 18:51:49.568148 systemd[1]: Starting setup-oem.service - Setup OEM...
Feb 13 18:51:49.579372 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 18:51:49.591414 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 18:51:49.602938 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 18:51:49.606066 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 18:51:49.607100 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 18:51:49.609383 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 18:51:49.616227 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 18:51:49.624869 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 18:51:49.625314 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 18:51:49.630733 dbus-daemon[1917]: [system] SELinux support is enabled
Feb 13 18:51:49.631086 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 18:51:49.639756 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 18:51:49.639836 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 18:51:49.642242 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 18:51:49.642287 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 18:51:49.652124 dbus-daemon[1917]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1860 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Feb 13 18:51:49.663843 extend-filesystems[1919]: Found loop4
Feb 13 18:51:49.663843 extend-filesystems[1919]: Found loop5
Feb 13 18:51:49.663843 extend-filesystems[1919]: Found loop6
Feb 13 18:51:49.663843 extend-filesystems[1919]: Found loop7
Feb 13 18:51:49.663843 extend-filesystems[1919]: Found nvme0n1
Feb 13 18:51:49.663843 extend-filesystems[1919]: Found nvme0n1p1
Feb 13 18:51:49.663843 extend-filesystems[1919]: Found nvme0n1p2
Feb 13 18:51:49.663843 extend-filesystems[1919]: Found nvme0n1p3
Feb 13 18:51:49.663843 extend-filesystems[1919]: Found usr
Feb 13 18:51:49.663843 extend-filesystems[1919]: Found nvme0n1p4
Feb 13 18:51:49.663843 extend-filesystems[1919]: Found nvme0n1p6
Feb 13 18:51:49.663843 extend-filesystems[1919]: Found nvme0n1p7
Feb 13 18:51:49.663843 extend-filesystems[1919]: Found nvme0n1p9
Feb 13 18:51:49.663843 extend-filesystems[1919]: Checking size of /dev/nvme0n1p9
Feb 13 18:51:49.688501 dbus-daemon[1917]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 13 18:51:49.715737 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Feb 13 18:51:49.738833 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 18:51:49.740341 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 18:51:49.754431 jq[1930]: true
Feb 13 18:51:49.787079 extend-filesystems[1919]: Resized partition /dev/nvme0n1p9
Feb 13 18:51:49.781705 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 18:51:49.782081 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 18:51:49.802757 (ntainerd)[1948]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 18:51:49.802479 ntpd[1921]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:01:18 UTC 2025 (1): Starting
Feb 13 18:51:49.803917 ntpd[1921]: 13 Feb 18:51:49 ntpd[1921]: ntpd 4.2.8p17@1.4004-o Thu Feb 13 17:01:18 UTC 2025 (1): Starting
Feb 13 18:51:49.803917 ntpd[1921]: 13 Feb 18:51:49 ntpd[1921]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Feb 13 18:51:49.803917 ntpd[1921]: 13 Feb 18:51:49 ntpd[1921]: ----------------------------------------------------
Feb 13 18:51:49.803917 ntpd[1921]: 13 Feb 18:51:49 ntpd[1921]: ntp-4 is maintained by Network Time Foundation,
Feb 13 18:51:49.803917 ntpd[1921]: 13 Feb 18:51:49 ntpd[1921]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Feb 13 18:51:49.803917 ntpd[1921]: 13 Feb 18:51:49 ntpd[1921]: corporation.  Support and training for ntp-4 are
Feb 13 18:51:49.803917 ntpd[1921]: 13 Feb 18:51:49 ntpd[1921]: available at https://www.nwtime.org/support
Feb 13 18:51:49.803917 ntpd[1921]: 13 Feb 18:51:49 ntpd[1921]: ----------------------------------------------------
Feb 13 18:51:49.802525 ntpd[1921]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Feb 13 18:51:49.813364 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 18:51:49.816002 extend-filesystems[1956]: resize2fs 1.47.1 (20-May-2024)
Feb 13 18:51:49.818163 update_engine[1929]: I20250213 18:51:49.810720  1929 main.cc:92] Flatcar Update Engine starting
Feb 13 18:51:49.818163 update_engine[1929]: I20250213 18:51:49.813439  1929 update_check_scheduler.cc:74] Next update check in 10m47s
Feb 13 18:51:49.802545 ntpd[1921]: ----------------------------------------------------
Feb 13 18:51:49.802563 ntpd[1921]: ntp-4 is maintained by Network Time Foundation,
Feb 13 18:51:49.802581 ntpd[1921]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Feb 13 18:51:49.802603 ntpd[1921]: corporation.  Support and training for ntp-4 are
Feb 13 18:51:49.802621 ntpd[1921]: available at https://www.nwtime.org/support
Feb 13 18:51:49.802638 ntpd[1921]: ----------------------------------------------------
Feb 13 18:51:49.820749 ntpd[1921]: proto: precision = 0.096 usec (-23)
Feb 13 18:51:49.821071 ntpd[1921]: 13 Feb 18:51:49 ntpd[1921]: proto: precision = 0.096 usec (-23)
Feb 13 18:51:49.821283 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Feb 13 18:51:49.836468 ntpd[1921]: basedate set to 2025-02-01
Feb 13 18:51:49.839176 ntpd[1921]: 13 Feb 18:51:49 ntpd[1921]: basedate set to 2025-02-01
Feb 13 18:51:49.839176 ntpd[1921]: 13 Feb 18:51:49 ntpd[1921]: gps base set to 2025-02-02 (week 2352)
Feb 13 18:51:49.836533 ntpd[1921]: gps base set to 2025-02-02 (week 2352)
Feb 13 18:51:49.843043 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Feb 13 18:51:49.850280 jq[1946]: true
Feb 13 18:51:49.858075 ntpd[1921]: Listen and drop on 0 v6wildcard [::]:123
Feb 13 18:51:49.858184 ntpd[1921]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Feb 13 18:51:49.858363 ntpd[1921]: 13 Feb 18:51:49 ntpd[1921]: Listen and drop on 0 v6wildcard [::]:123
Feb 13 18:51:49.858363 ntpd[1921]: 13 Feb 18:51:49 ntpd[1921]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Feb 13 18:51:49.858497 ntpd[1921]: Listen normally on 2 lo 127.0.0.1:123
Feb 13 18:51:49.858557 ntpd[1921]: 13 Feb 18:51:49 ntpd[1921]: Listen normally on 2 lo 127.0.0.1:123
Feb 13 18:51:49.858612 ntpd[1921]: 13 Feb 18:51:49 ntpd[1921]: Listen normally on 3 eth0 172.31.21.163:123
Feb 13 18:51:49.858560 ntpd[1921]: Listen normally on 3 eth0 172.31.21.163:123
Feb 13 18:51:49.858709 ntpd[1921]: 13 Feb 18:51:49 ntpd[1921]: Listen normally on 4 lo [::1]:123
Feb 13 18:51:49.858709 ntpd[1921]: 13 Feb 18:51:49 ntpd[1921]: bind(21) AF_INET6 fe80::424:5dff:fe85:8751%2#123 flags 0x11 failed: Cannot assign requested address
Feb 13 18:51:49.858627 ntpd[1921]: Listen normally on 4 lo [::1]:123
Feb 13 18:51:49.858863 ntpd[1921]: 13 Feb 18:51:49 ntpd[1921]: unable to create socket on eth0 (5) for fe80::424:5dff:fe85:8751%2#123
Feb 13 18:51:49.858863 ntpd[1921]: 13 Feb 18:51:49 ntpd[1921]: failed to init interface for address fe80::424:5dff:fe85:8751%2
Feb 13 18:51:49.858863 ntpd[1921]: 13 Feb 18:51:49 ntpd[1921]: Listening on routing socket on fd #21 for interface updates
Feb 13 18:51:49.858695 ntpd[1921]: bind(21) AF_INET6 fe80::424:5dff:fe85:8751%2#123 flags 0x11 failed: Cannot assign requested address
Feb 13 18:51:49.858734 ntpd[1921]: unable to create socket on eth0 (5) for fe80::424:5dff:fe85:8751%2#123
Feb 13 18:51:49.858762 ntpd[1921]: failed to init interface for address fe80::424:5dff:fe85:8751%2
Feb 13 18:51:49.858811 ntpd[1921]: Listening on routing socket on fd #21 for interface updates
Feb 13 18:51:49.892762 ntpd[1921]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 18:51:49.899688 ntpd[1921]: 13 Feb 18:51:49 ntpd[1921]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 18:51:49.899688 ntpd[1921]: 13 Feb 18:51:49 ntpd[1921]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 18:51:49.895664 ntpd[1921]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Feb 13 18:51:49.974780 systemd[1]: Finished setup-oem.service - Setup OEM.
Feb 13 18:51:50.002479 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Feb 13 18:51:50.019909 extend-filesystems[1956]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Feb 13 18:51:50.019909 extend-filesystems[1956]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 13 18:51:50.019909 extend-filesystems[1956]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Feb 13 18:51:50.039169 extend-filesystems[1919]: Resized filesystem in /dev/nvme0n1p9
Feb 13 18:51:50.024547 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 13 18:51:50.025394 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Feb 13 18:51:50.069625 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 18:51:50.100331 coreos-metadata[1916]: Feb 13 18:51:50.099 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Feb 13 18:51:50.104461 systemd-logind[1928]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 13 18:51:50.104513 systemd-logind[1928]: Watching system buttons on /dev/input/event1 (Sleep Button)
Feb 13 18:51:50.114860 coreos-metadata[1916]: Feb 13 18:51:50.105 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Feb 13 18:51:50.114860 coreos-metadata[1916]: Feb 13 18:51:50.106 INFO Fetch successful
Feb 13 18:51:50.114860 coreos-metadata[1916]: Feb 13 18:51:50.106 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Feb 13 18:51:50.114860 coreos-metadata[1916]: Feb 13 18:51:50.111 INFO Fetch successful
Feb 13 18:51:50.114860 coreos-metadata[1916]: Feb 13 18:51:50.111 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Feb 13 18:51:50.117373 coreos-metadata[1916]: Feb 13 18:51:50.116 INFO Fetch successful
Feb 13 18:51:50.117373 coreos-metadata[1916]: Feb 13 18:51:50.116 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Feb 13 18:51:50.117537 bash[1996]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 18:51:50.118214 coreos-metadata[1916]: Feb 13 18:51:50.118 INFO Fetch successful
Feb 13 18:51:50.118214 coreos-metadata[1916]: Feb 13 18:51:50.118 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Feb 13 18:51:50.120873 coreos-metadata[1916]: Feb 13 18:51:50.120 INFO Fetch failed with 404: resource not found
Feb 13 18:51:50.120873 coreos-metadata[1916]: Feb 13 18:51:50.120 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Feb 13 18:51:50.124386 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (1730)
Feb 13 18:51:50.123105 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Feb 13 18:51:50.125926 coreos-metadata[1916]: Feb 13 18:51:50.125 INFO Fetch successful
Feb 13 18:51:50.125926 coreos-metadata[1916]: Feb 13 18:51:50.125 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Feb 13 18:51:50.130612 coreos-metadata[1916]: Feb 13 18:51:50.128 INFO Fetch successful
Feb 13 18:51:50.130612 coreos-metadata[1916]: Feb 13 18:51:50.128 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Feb 13 18:51:50.133460 coreos-metadata[1916]: Feb 13 18:51:50.133 INFO Fetch successful
Feb 13 18:51:50.133460 coreos-metadata[1916]: Feb 13 18:51:50.133 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Feb 13 18:51:50.135944 coreos-metadata[1916]: Feb 13 18:51:50.135 INFO Fetch successful
Feb 13 18:51:50.135944 coreos-metadata[1916]: Feb 13 18:51:50.135 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Feb 13 18:51:50.142048 coreos-metadata[1916]: Feb 13 18:51:50.140 INFO Fetch successful
Feb 13 18:51:50.195219 systemd-logind[1928]: New seat seat0.
Feb 13 18:51:50.279604 systemd[1]: Starting sshkeys.service...
Feb 13 18:51:50.281341 systemd[1]: Started systemd-logind.service - User Login Management.
Feb 13 18:51:50.334662 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Feb 13 18:51:50.344654 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Feb 13 18:51:50.365764 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Feb 13 18:51:50.369624 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Feb 13 18:51:50.436200 locksmithd[1959]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 18:51:50.456886 dbus-daemon[1917]: [system] Successfully activated service 'org.freedesktop.hostname1'
Feb 13 18:51:50.457155 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Feb 13 18:51:50.464230 dbus-daemon[1917]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1944 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Feb 13 18:51:50.484280 systemd[1]: Starting polkit.service - Authorization Manager...
Feb 13 18:51:50.520703 polkitd[2061]: Started polkitd version 121
Feb 13 18:51:50.535308 polkitd[2061]: Loading rules from directory /etc/polkit-1/rules.d
Feb 13 18:51:50.535483 polkitd[2061]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 13 18:51:50.550606 polkitd[2061]: Finished loading, compiling and executing 2 rules
Feb 13 18:51:50.551915 dbus-daemon[1917]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Feb 13 18:51:50.553923 systemd[1]: Started polkit.service - Authorization Manager.
Feb 13 18:51:50.556605 polkitd[2061]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Feb 13 18:51:50.572008 containerd[1948]: time="2025-02-13T18:51:50.570154364Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 18:51:50.614486 systemd-hostnamed[1944]: Hostname set to <ip-172-31-21-163> (transient)
Feb 13 18:51:50.614773 systemd-resolved[1862]: System hostname changed to 'ip-172-31-21-163'.
Feb 13 18:51:50.699205 systemd-networkd[1860]: eth0: Gained IPv6LL
Feb 13 18:51:50.708065 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 18:51:50.712623 systemd[1]: Reached target network-online.target - Network is Online.
Feb 13 18:51:50.731924 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent.
Feb 13 18:51:50.741618 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 18:51:50.750826 coreos-metadata[2046]: Feb 13 18:51:50.745 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Feb 13 18:51:50.746659 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 18:51:50.756644 coreos-metadata[2046]: Feb 13 18:51:50.751 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1
Feb 13 18:51:50.756644 coreos-metadata[2046]: Feb 13 18:51:50.756 INFO Fetch successful
Feb 13 18:51:50.756644 coreos-metadata[2046]: Feb 13 18:51:50.756 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1
Feb 13 18:51:50.769000 coreos-metadata[2046]: Feb 13 18:51:50.767 INFO Fetch successful
Feb 13 18:51:50.783920 unknown[2046]: wrote ssh authorized keys file for user: core
Feb 13 18:51:50.839959 containerd[1948]: time="2025-02-13T18:51:50.839037058Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 18:51:50.853023 containerd[1948]: time="2025-02-13T18:51:50.852454162Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 18:51:50.853023 containerd[1948]: time="2025-02-13T18:51:50.852538510Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 18:51:50.853023 containerd[1948]: time="2025-02-13T18:51:50.852599026Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 18:51:50.853023 containerd[1948]: time="2025-02-13T18:51:50.852929878Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 18:51:50.853023 containerd[1948]: time="2025-02-13T18:51:50.852999706Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 18:51:50.853320 containerd[1948]: time="2025-02-13T18:51:50.853136662Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 18:51:50.853320 containerd[1948]: time="2025-02-13T18:51:50.853167262Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 18:51:50.853669 containerd[1948]: time="2025-02-13T18:51:50.853460158Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 18:51:50.853669 containerd[1948]: time="2025-02-13T18:51:50.853504246Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 18:51:50.853669 containerd[1948]: time="2025-02-13T18:51:50.853538290Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 18:51:50.853669 containerd[1948]: time="2025-02-13T18:51:50.853561738Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 18:51:50.853847 containerd[1948]: time="2025-02-13T18:51:50.853721014Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 18:51:50.860212 containerd[1948]: time="2025-02-13T18:51:50.860129506Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 18:51:50.861124 containerd[1948]: time="2025-02-13T18:51:50.860413270Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 18:51:50.861124 containerd[1948]: time="2025-02-13T18:51:50.860458450Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 18:51:50.861124 containerd[1948]: time="2025-02-13T18:51:50.860691058Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 18:51:50.861124 containerd[1948]: time="2025-02-13T18:51:50.860798734Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 18:51:50.873005 containerd[1948]: time="2025-02-13T18:51:50.869138758Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 18:51:50.873005 containerd[1948]: time="2025-02-13T18:51:50.869345122Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 18:51:50.873005 containerd[1948]: time="2025-02-13T18:51:50.869390662Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 18:51:50.873005 containerd[1948]: time="2025-02-13T18:51:50.869434546Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 18:51:50.873005 containerd[1948]: time="2025-02-13T18:51:50.869472550Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 18:51:50.873005 containerd[1948]: time="2025-02-13T18:51:50.869748778Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 18:51:50.873005 containerd[1948]: time="2025-02-13T18:51:50.870195922Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 18:51:50.873005 containerd[1948]: time="2025-02-13T18:51:50.870426694Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 18:51:50.873005 containerd[1948]: time="2025-02-13T18:51:50.870462670Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 18:51:50.873005 containerd[1948]: time="2025-02-13T18:51:50.870496930Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 18:51:50.873005 containerd[1948]: time="2025-02-13T18:51:50.870543514Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 18:51:50.873005 containerd[1948]: time="2025-02-13T18:51:50.870578182Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 18:51:50.873005 containerd[1948]: time="2025-02-13T18:51:50.870609610Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 18:51:50.873005 containerd[1948]: time="2025-02-13T18:51:50.870639994Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 18:51:50.873702 containerd[1948]: time="2025-02-13T18:51:50.870671770Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 18:51:50.873702 containerd[1948]: time="2025-02-13T18:51:50.870701338Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 18:51:50.873702 containerd[1948]: time="2025-02-13T18:51:50.870731938Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 18:51:50.873702 containerd[1948]: time="2025-02-13T18:51:50.870759514Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 18:51:50.873702 containerd[1948]: time="2025-02-13T18:51:50.870799294Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 18:51:50.873702 containerd[1948]: time="2025-02-13T18:51:50.870833722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 18:51:50.880309 containerd[1948]: time="2025-02-13T18:51:50.870862594Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 18:51:50.880309 containerd[1948]: time="2025-02-13T18:51:50.878577238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 18:51:50.880309 containerd[1948]: time="2025-02-13T18:51:50.878624926Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 18:51:50.880309 containerd[1948]: time="2025-02-13T18:51:50.878683066Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 18:51:50.880309 containerd[1948]: time="2025-02-13T18:51:50.878719162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 18:51:50.880309 containerd[1948]: time="2025-02-13T18:51:50.878777206Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 18:51:50.880309 containerd[1948]: time="2025-02-13T18:51:50.878809966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 18:51:50.880309 containerd[1948]: time="2025-02-13T18:51:50.878872786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 18:51:50.880309 containerd[1948]: time="2025-02-13T18:51:50.878933842Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 18:51:50.880309 containerd[1948]: time="2025-02-13T18:51:50.878994202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 18:51:50.880309 containerd[1948]: time="2025-02-13T18:51:50.879027526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 18:51:50.880309 containerd[1948]: time="2025-02-13T18:51:50.879087862Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 18:51:50.881066 containerd[1948]: time="2025-02-13T18:51:50.879613174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 18:51:50.881378 containerd[1948]: time="2025-02-13T18:51:50.881320366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 18:51:50.881475 containerd[1948]: time="2025-02-13T18:51:50.881396386Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 18:51:50.882117 containerd[1948]: time="2025-02-13T18:51:50.882058318Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 18:51:50.882194 containerd[1948]: time="2025-02-13T18:51:50.882145774Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 18:51:50.887990 containerd[1948]: time="2025-02-13T18:51:50.886021846Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 18:51:50.887990 containerd[1948]: time="2025-02-13T18:51:50.886133926Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 18:51:50.887990 containerd[1948]: time="2025-02-13T18:51:50.886184650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 18:51:50.887990 containerd[1948]: time="2025-02-13T18:51:50.886220350Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 18:51:50.887990 containerd[1948]: time="2025-02-13T18:51:50.886268614Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 18:51:50.887990 containerd[1948]: time="2025-02-13T18:51:50.886298062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 18:51:50.897043 containerd[1948]: time="2025-02-13T18:51:50.887153194Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 18:51:50.897043 containerd[1948]: time="2025-02-13T18:51:50.887292442Z" level=info msg="Connect containerd service"
Feb 13 18:51:50.897043 containerd[1948]: time="2025-02-13T18:51:50.887943802Z" level=info msg="using legacy CRI server"
Feb 13 18:51:50.897043 containerd[1948]: time="2025-02-13T18:51:50.887993374Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 18:51:50.897043 containerd[1948]: time="2025-02-13T18:51:50.890634046Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 18:51:50.904019 containerd[1948]: time="2025-02-13T18:51:50.899562586Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 18:51:50.904019 containerd[1948]: time="2025-02-13T18:51:50.899779366Z" level=info msg="Start subscribing containerd event"
Feb 13 18:51:50.904019 containerd[1948]: time="2025-02-13T18:51:50.899861014Z" level=info msg="Start recovering state"
Feb 13 18:51:50.904019 containerd[1948]: time="2025-02-13T18:51:50.900002854Z" level=info msg="Start event monitor"
Feb 13 18:51:50.904019 containerd[1948]: time="2025-02-13T18:51:50.900025930Z" level=info msg="Start snapshots syncer"
Feb 13 18:51:50.904019 containerd[1948]: time="2025-02-13T18:51:50.900051658Z" level=info msg="Start cni network conf syncer for default"
Feb 13 18:51:50.904019 containerd[1948]: time="2025-02-13T18:51:50.900070942Z" level=info msg="Start streaming server"
Feb 13 18:51:50.904019 containerd[1948]: time="2025-02-13T18:51:50.901125826Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 18:51:50.905686 containerd[1948]: time="2025-02-13T18:51:50.904276630Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 18:51:50.913998 update-ssh-keys[2112]: Updated "/home/core/.ssh/authorized_keys"
Feb 13 18:51:50.917573 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Feb 13 18:51:50.924207 containerd[1948]: time="2025-02-13T18:51:50.923161318Z" level=info msg="containerd successfully booted in 0.356097s"
Feb 13 18:51:50.926195 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 18:51:50.933520 systemd[1]: Finished sshkeys.service.
Feb 13 18:51:50.945250 amazon-ssm-agent[2102]: Initializing new seelog logger
Feb 13 18:51:50.946300 amazon-ssm-agent[2102]: New Seelog Logger Creation Complete
Feb 13 18:51:50.947231 amazon-ssm-agent[2102]: 2025/02/13 18:51:50 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 18:51:50.947231 amazon-ssm-agent[2102]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 18:51:50.952195 amazon-ssm-agent[2102]: 2025/02/13 18:51:50 processing appconfig overrides
Feb 13 18:51:50.955893 amazon-ssm-agent[2102]: 2025/02/13 18:51:50 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 18:51:50.955893 amazon-ssm-agent[2102]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 18:51:50.955893 amazon-ssm-agent[2102]: 2025-02-13 18:51:50 INFO Proxy environment variables:
Feb 13 18:51:50.956478 amazon-ssm-agent[2102]: 2025/02/13 18:51:50 processing appconfig overrides
Feb 13 18:51:50.959822 amazon-ssm-agent[2102]: 2025/02/13 18:51:50 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 18:51:50.959822 amazon-ssm-agent[2102]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 18:51:50.959822 amazon-ssm-agent[2102]: 2025/02/13 18:51:50 processing appconfig overrides
Feb 13 18:51:50.969602 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 18:51:50.978903 amazon-ssm-agent[2102]: 2025/02/13 18:51:50 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 18:51:50.978903 amazon-ssm-agent[2102]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 13 18:51:50.978903 amazon-ssm-agent[2102]: 2025/02/13 18:51:50 processing appconfig overrides
Feb 13 18:51:51.054475 sshd_keygen[1962]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 13 18:51:51.055432 amazon-ssm-agent[2102]: 2025-02-13 18:51:50 INFO https_proxy:
Feb 13 18:51:51.111099 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Feb 13 18:51:51.128107 systemd[1]: Starting issuegen.service - Generate /run/issue...
Feb 13 18:51:51.141367 systemd[1]: Started sshd@0-172.31.21.163:22-139.178.68.195:51684.service - OpenSSH per-connection server daemon (139.178.68.195:51684).
Feb 13 18:51:51.156132 amazon-ssm-agent[2102]: 2025-02-13 18:51:50 INFO http_proxy:
Feb 13 18:51:51.176785 systemd[1]: issuegen.service: Deactivated successfully.
Feb 13 18:51:51.179957 systemd[1]: Finished issuegen.service - Generate /run/issue.
Feb 13 18:51:51.196581 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Feb 13 18:51:51.254435 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 18:51:51.256499 amazon-ssm-agent[2102]: 2025-02-13 18:51:50 INFO no_proxy:
Feb 13 18:51:51.268771 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 18:51:51.280890 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Feb 13 18:51:51.284414 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 18:51:51.357529 amazon-ssm-agent[2102]: 2025-02-13 18:51:50 INFO Checking if agent identity type OnPrem can be assumed
Feb 13 18:51:51.427894 sshd[2143]: Accepted publickey for core from 139.178.68.195 port 51684 ssh2: RSA SHA256:XEROeIWkc72PDLH9n7zHrDYR35YLR9YDpRI11EXJY0s
Feb 13 18:51:51.429811 sshd-session[2143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:51:51.450041 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 13 18:51:51.457013 amazon-ssm-agent[2102]: 2025-02-13 18:51:50 INFO Checking if agent identity type EC2 can be assumed
Feb 13 18:51:51.467062 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 13 18:51:51.475387 systemd-logind[1928]: New session 1 of user core.
Feb 13 18:51:51.507022 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 13 18:51:51.523619 systemd[1]: Starting user@500.service - User Manager for UID 500...
Feb 13 18:51:51.543075 (systemd)[2156]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 13 18:51:51.556422 amazon-ssm-agent[2102]: 2025-02-13 18:51:51 INFO Agent will take identity from EC2
Feb 13 18:51:51.656170 amazon-ssm-agent[2102]: 2025-02-13 18:51:51 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 18:51:51.755521 amazon-ssm-agent[2102]: 2025-02-13 18:51:51 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 18:51:51.853450 systemd[2156]: Queued start job for default target default.target.
Feb 13 18:51:51.861079 systemd[2156]: Created slice app.slice - User Application Slice.
Feb 13 18:51:51.863500 amazon-ssm-agent[2102]: 2025-02-13 18:51:51 INFO [amazon-ssm-agent] using named pipe channel for IPC
Feb 13 18:51:51.861149 systemd[2156]: Reached target paths.target - Paths.
Feb 13 18:51:51.861182 systemd[2156]: Reached target timers.target - Timers.
Feb 13 18:51:51.865217 systemd[2156]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 13 18:51:51.899885 systemd[2156]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 13 18:51:51.900197 systemd[2156]: Reached target sockets.target - Sockets.
Feb 13 18:51:51.900243 systemd[2156]: Reached target basic.target - Basic System.
Feb 13 18:51:51.900478 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 13 18:51:51.903095 systemd[2156]: Reached target default.target - Main User Target.
Feb 13 18:51:51.903194 systemd[2156]: Startup finished in 338ms.
Feb 13 18:51:51.912646 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 13 18:51:51.956019 amazon-ssm-agent[2102]: 2025-02-13 18:51:51 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0
Feb 13 18:51:52.056298 amazon-ssm-agent[2102]: 2025-02-13 18:51:51 INFO [amazon-ssm-agent] OS: linux, Arch: arm64
Feb 13 18:51:52.097528 systemd[1]: Started sshd@1-172.31.21.163:22-139.178.68.195:51692.service - OpenSSH per-connection server daemon (139.178.68.195:51692).
Feb 13 18:51:52.154685 amazon-ssm-agent[2102]: 2025-02-13 18:51:51 INFO [amazon-ssm-agent] Starting Core Agent
Feb 13 18:51:52.194294 amazon-ssm-agent[2102]: 2025-02-13 18:51:51 INFO [amazon-ssm-agent] registrar detected. Attempting registration
Feb 13 18:51:52.194294 amazon-ssm-agent[2102]: 2025-02-13 18:51:51 INFO [Registrar] Starting registrar module
Feb 13 18:51:52.194294 amazon-ssm-agent[2102]: 2025-02-13 18:51:51 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration
Feb 13 18:51:52.194294 amazon-ssm-agent[2102]: 2025-02-13 18:51:52 INFO [EC2Identity] EC2 registration was successful.
Feb 13 18:51:52.194294 amazon-ssm-agent[2102]: 2025-02-13 18:51:52 INFO [CredentialRefresher] credentialRefresher has started
Feb 13 18:51:52.194294 amazon-ssm-agent[2102]: 2025-02-13 18:51:52 INFO [CredentialRefresher] Starting credentials refresher loop
Feb 13 18:51:52.194294 amazon-ssm-agent[2102]: 2025-02-13 18:51:52 INFO EC2RoleProvider Successfully connected with instance profile role credentials
Feb 13 18:51:52.255370 amazon-ssm-agent[2102]: 2025-02-13 18:51:52 INFO [CredentialRefresher] Next credential rotation will be in 31.516656080366666 minutes
Feb 13 18:51:52.323394 sshd[2167]: Accepted publickey for core from 139.178.68.195 port 51692 ssh2: RSA SHA256:XEROeIWkc72PDLH9n7zHrDYR35YLR9YDpRI11EXJY0s
Feb 13 18:51:52.327135 sshd-session[2167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:51:52.337530 systemd-logind[1928]: New session 2 of user core.
Feb 13 18:51:52.345366 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 18:51:52.477065 sshd[2169]: Connection closed by 139.178.68.195 port 51692
Feb 13 18:51:52.479284 sshd-session[2167]: pam_unix(sshd:session): session closed for user core
Feb 13 18:51:52.486329 systemd-logind[1928]: Session 2 logged out. Waiting for processes to exit.
Feb 13 18:51:52.487873 systemd[1]: sshd@1-172.31.21.163:22-139.178.68.195:51692.service: Deactivated successfully.
Feb 13 18:51:52.491387 systemd[1]: session-2.scope: Deactivated successfully.
Feb 13 18:51:52.494531 systemd-logind[1928]: Removed session 2.
Feb 13 18:51:52.518678 systemd[1]: Started sshd@2-172.31.21.163:22-139.178.68.195:51702.service - OpenSSH per-connection server daemon (139.178.68.195:51702).
Feb 13 18:51:52.637663 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 18:51:52.645139 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 18:51:52.676086 systemd[1]: Startup finished in 1.149s (kernel) + 6.924s (initrd) + 8.014s (userspace) = 16.089s.
Feb 13 18:51:52.690763 agetty[2150]: failed to open credentials directory
Feb 13 18:51:52.692467 agetty[2151]: failed to open credentials directory
Feb 13 18:51:52.693759 (kubelet)[2181]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 18:51:52.758804 sshd[2174]: Accepted publickey for core from 139.178.68.195 port 51702 ssh2: RSA SHA256:XEROeIWkc72PDLH9n7zHrDYR35YLR9YDpRI11EXJY0s
Feb 13 18:51:52.760701 sshd-session[2174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:51:52.769624 systemd-logind[1928]: New session 3 of user core.
Feb 13 18:51:52.777258 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 18:51:52.803243 ntpd[1921]: Listen normally on 6 eth0 [fe80::424:5dff:fe85:8751%2]:123
Feb 13 18:51:52.803858 ntpd[1921]: 13 Feb 18:51:52 ntpd[1921]: Listen normally on 6 eth0 [fe80::424:5dff:fe85:8751%2]:123
Feb 13 18:51:52.907103 sshd[2186]: Connection closed by 139.178.68.195 port 51702
Feb 13 18:51:52.908324 sshd-session[2174]: pam_unix(sshd:session): session closed for user core
Feb 13 18:51:52.918134 systemd[1]: sshd@2-172.31.21.163:22-139.178.68.195:51702.service: Deactivated successfully.
Feb 13 18:51:52.922567 systemd[1]: session-3.scope: Deactivated successfully.
Feb 13 18:51:52.925898 systemd-logind[1928]: Session 3 logged out. Waiting for processes to exit.
Feb 13 18:51:52.928772 systemd-logind[1928]: Removed session 3.
Feb 13 18:51:53.223514 amazon-ssm-agent[2102]: 2025-02-13 18:51:53 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process
Feb 13 18:51:53.325771 amazon-ssm-agent[2102]: 2025-02-13 18:51:53 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2195) started
Feb 13 18:51:53.426937 amazon-ssm-agent[2102]: 2025-02-13 18:51:53 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds
Feb 13 18:51:53.718021 kubelet[2181]: E0213 18:51:53.717015    2181 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 18:51:53.720517 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 18:51:53.720833 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 18:51:53.721512 systemd[1]: kubelet.service: Consumed 1.349s CPU time.
Feb 13 18:51:56.573336 systemd-resolved[1862]: Clock change detected. Flushing caches.
Feb 13 18:52:02.720482 systemd[1]: Started sshd@3-172.31.21.163:22-139.178.68.195:46422.service - OpenSSH per-connection server daemon (139.178.68.195:46422).
Feb 13 18:52:02.909097 sshd[2212]: Accepted publickey for core from 139.178.68.195 port 46422 ssh2: RSA SHA256:XEROeIWkc72PDLH9n7zHrDYR35YLR9YDpRI11EXJY0s
Feb 13 18:52:02.911482 sshd-session[2212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:52:02.919228 systemd-logind[1928]: New session 4 of user core.
Feb 13 18:52:02.929064 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 18:52:03.057911 sshd[2214]: Connection closed by 139.178.68.195 port 46422
Feb 13 18:52:03.059076 sshd-session[2212]: pam_unix(sshd:session): session closed for user core
Feb 13 18:52:03.065488 systemd[1]: sshd@3-172.31.21.163:22-139.178.68.195:46422.service: Deactivated successfully.
Feb 13 18:52:03.069265 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 18:52:03.071735 systemd-logind[1928]: Session 4 logged out. Waiting for processes to exit.
Feb 13 18:52:03.073670 systemd-logind[1928]: Removed session 4.
Feb 13 18:52:03.099381 systemd[1]: Started sshd@4-172.31.21.163:22-139.178.68.195:46428.service - OpenSSH per-connection server daemon (139.178.68.195:46428).
Feb 13 18:52:03.287092 sshd[2219]: Accepted publickey for core from 139.178.68.195 port 46428 ssh2: RSA SHA256:XEROeIWkc72PDLH9n7zHrDYR35YLR9YDpRI11EXJY0s
Feb 13 18:52:03.289783 sshd-session[2219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:52:03.297155 systemd-logind[1928]: New session 5 of user core.
Feb 13 18:52:03.308240 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 18:52:03.424531 sshd[2221]: Connection closed by 139.178.68.195 port 46428
Feb 13 18:52:03.425477 sshd-session[2219]: pam_unix(sshd:session): session closed for user core
Feb 13 18:52:03.431611 systemd[1]: sshd@4-172.31.21.163:22-139.178.68.195:46428.service: Deactivated successfully.
Feb 13 18:52:03.436162 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 18:52:03.438066 systemd-logind[1928]: Session 5 logged out. Waiting for processes to exit.
Feb 13 18:52:03.440151 systemd-logind[1928]: Removed session 5.
Feb 13 18:52:03.464561 systemd[1]: Started sshd@5-172.31.21.163:22-139.178.68.195:46438.service - OpenSSH per-connection server daemon (139.178.68.195:46438).
Feb 13 18:52:03.616648 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 13 18:52:03.626229 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 18:52:03.651867 sshd[2226]: Accepted publickey for core from 139.178.68.195 port 46438 ssh2: RSA SHA256:XEROeIWkc72PDLH9n7zHrDYR35YLR9YDpRI11EXJY0s
Feb 13 18:52:03.652632 sshd-session[2226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:52:03.663985 systemd-logind[1928]: New session 6 of user core.
Feb 13 18:52:03.672124 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 18:52:03.807864 sshd[2231]: Connection closed by 139.178.68.195 port 46438
Feb 13 18:52:03.809651 sshd-session[2226]: pam_unix(sshd:session): session closed for user core
Feb 13 18:52:03.815336 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 18:52:03.816788 systemd[1]: sshd@5-172.31.21.163:22-139.178.68.195:46438.service: Deactivated successfully.
Feb 13 18:52:03.825293 systemd-logind[1928]: Session 6 logged out. Waiting for processes to exit.
Feb 13 18:52:03.828578 systemd-logind[1928]: Removed session 6.
Feb 13 18:52:03.856405 systemd[1]: Started sshd@6-172.31.21.163:22-139.178.68.195:46446.service - OpenSSH per-connection server daemon (139.178.68.195:46446).
Feb 13 18:52:03.940139 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 18:52:03.942743 (kubelet)[2243]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 18:52:04.022636 kubelet[2243]: E0213 18:52:04.022545    2243 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 18:52:04.029966 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 18:52:04.030292 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 18:52:04.066484 sshd[2236]: Accepted publickey for core from 139.178.68.195 port 46446 ssh2: RSA SHA256:XEROeIWkc72PDLH9n7zHrDYR35YLR9YDpRI11EXJY0s
Feb 13 18:52:04.069359 sshd-session[2236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:52:04.079105 systemd-logind[1928]: New session 7 of user core.
Feb 13 18:52:04.086125 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 18:52:04.203452 sudo[2252]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 13 18:52:04.204730 sudo[2252]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 18:52:04.225060 sudo[2252]: pam_unix(sudo:session): session closed for user root
Feb 13 18:52:04.249471 sshd[2251]: Connection closed by 139.178.68.195 port 46446
Feb 13 18:52:04.249954 sshd-session[2236]: pam_unix(sshd:session): session closed for user core
Feb 13 18:52:04.257227 systemd-logind[1928]: Session 7 logged out. Waiting for processes to exit.
Feb 13 18:52:04.257998 systemd[1]: sshd@6-172.31.21.163:22-139.178.68.195:46446.service: Deactivated successfully.
Feb 13 18:52:04.261455 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 18:52:04.266017 systemd-logind[1928]: Removed session 7.
Feb 13 18:52:04.293337 systemd[1]: Started sshd@7-172.31.21.163:22-139.178.68.195:46458.service - OpenSSH per-connection server daemon (139.178.68.195:46458).
Feb 13 18:52:04.482980 sshd[2257]: Accepted publickey for core from 139.178.68.195 port 46458 ssh2: RSA SHA256:XEROeIWkc72PDLH9n7zHrDYR35YLR9YDpRI11EXJY0s
Feb 13 18:52:04.485433 sshd-session[2257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:52:04.494223 systemd-logind[1928]: New session 8 of user core.
Feb 13 18:52:04.501149 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 18:52:04.609987 sudo[2261]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 13 18:52:04.611196 sudo[2261]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 18:52:04.618162 sudo[2261]: pam_unix(sudo:session): session closed for user root
Feb 13 18:52:04.628995 sudo[2260]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Feb 13 18:52:04.629627 sudo[2260]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 18:52:04.654398 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 18:52:04.703455 augenrules[2283]: No rules
Feb 13 18:52:04.705784 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 18:52:04.706339 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 18:52:04.708467 sudo[2260]: pam_unix(sudo:session): session closed for user root
Feb 13 18:52:04.733119 sshd[2259]: Connection closed by 139.178.68.195 port 46458
Feb 13 18:52:04.734645 sshd-session[2257]: pam_unix(sshd:session): session closed for user core
Feb 13 18:52:04.740621 systemd-logind[1928]: Session 8 logged out. Waiting for processes to exit.
Feb 13 18:52:04.741777 systemd[1]: sshd@7-172.31.21.163:22-139.178.68.195:46458.service: Deactivated successfully.
Feb 13 18:52:04.745496 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 18:52:04.747478 systemd-logind[1928]: Removed session 8.
Feb 13 18:52:04.771348 systemd[1]: Started sshd@8-172.31.21.163:22-139.178.68.195:46466.service - OpenSSH per-connection server daemon (139.178.68.195:46466).
Feb 13 18:52:04.954946 sshd[2291]: Accepted publickey for core from 139.178.68.195 port 46466 ssh2: RSA SHA256:XEROeIWkc72PDLH9n7zHrDYR35YLR9YDpRI11EXJY0s
Feb 13 18:52:04.957700 sshd-session[2291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 18:52:04.966615 systemd-logind[1928]: New session 9 of user core.
Feb 13 18:52:04.978060 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 18:52:05.084477 sudo[2294]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 18:52:05.085652 sudo[2294]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 18:52:06.121781 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 18:52:06.131481 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 18:52:06.177102 systemd[1]: Reloading requested from client PID 2331 ('systemctl') (unit session-9.scope)...
Feb 13 18:52:06.177306 systemd[1]: Reloading...
Feb 13 18:52:06.428900 zram_generator::config[2377]: No configuration found.
Feb 13 18:52:06.663146 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 18:52:06.833663 systemd[1]: Reloading finished in 655 ms.
Feb 13 18:52:06.933947 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Feb 13 18:52:06.934157 systemd[1]: kubelet.service: Failed with result 'signal'.
Feb 13 18:52:06.935923 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 18:52:06.948649 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 18:52:07.237153 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 18:52:07.253547 (kubelet)[2435]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 18:52:07.330690 kubelet[2435]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 18:52:07.330690 kubelet[2435]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 18:52:07.330690 kubelet[2435]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 18:52:07.331331 kubelet[2435]: I0213 18:52:07.330797    2435 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 18:52:08.673912 kubelet[2435]: I0213 18:52:08.673658    2435 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Feb 13 18:52:08.673912 kubelet[2435]: I0213 18:52:08.673701    2435 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 18:52:08.674527 kubelet[2435]: I0213 18:52:08.674104    2435 server.go:927] "Client rotation is on, will bootstrap in background"
Feb 13 18:52:08.705677 kubelet[2435]: I0213 18:52:08.704714    2435 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 18:52:08.720486 kubelet[2435]: I0213 18:52:08.720286    2435 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Feb 13 18:52:08.723231 kubelet[2435]: I0213 18:52:08.723130    2435 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 18:52:08.724477 kubelet[2435]: I0213 18:52:08.723404    2435 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.21.163","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 13 18:52:08.724477 kubelet[2435]: I0213 18:52:08.723867    2435 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 18:52:08.724477 kubelet[2435]: I0213 18:52:08.723891    2435 container_manager_linux.go:301] "Creating device plugin manager"
Feb 13 18:52:08.724477 kubelet[2435]: I0213 18:52:08.724228    2435 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 18:52:08.725774 kubelet[2435]: I0213 18:52:08.725722    2435 kubelet.go:400] "Attempting to sync node with API server"
Feb 13 18:52:08.726099 kubelet[2435]: I0213 18:52:08.726072    2435 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 18:52:08.726324 kubelet[2435]: I0213 18:52:08.726302    2435 kubelet.go:312] "Adding apiserver pod source"
Feb 13 18:52:08.726479 kubelet[2435]: I0213 18:52:08.726460    2435 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 18:52:08.726851 kubelet[2435]: E0213 18:52:08.726785    2435 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:08.727366 kubelet[2435]: E0213 18:52:08.727338    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:08.728869 kubelet[2435]: I0213 18:52:08.728785    2435 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 18:52:08.729332 kubelet[2435]: I0213 18:52:08.729304    2435 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 18:52:08.729450 kubelet[2435]: W0213 18:52:08.729402    2435 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 18:52:08.730570 kubelet[2435]: I0213 18:52:08.730521    2435 server.go:1264] "Started kubelet"
Feb 13 18:52:08.732169 kubelet[2435]: I0213 18:52:08.731979    2435 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 18:52:08.735115 kubelet[2435]: I0213 18:52:08.735017    2435 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 18:52:08.736863 kubelet[2435]: I0213 18:52:08.735468    2435 server.go:455] "Adding debug handlers to kubelet server"
Feb 13 18:52:08.736863 kubelet[2435]: I0213 18:52:08.735573    2435 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 18:52:08.739555 kubelet[2435]: I0213 18:52:08.739493    2435 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 18:52:08.756994 kubelet[2435]: I0213 18:52:08.756863    2435 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 13 18:52:08.760355 kubelet[2435]: I0213 18:52:08.760254    2435 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Feb 13 18:52:08.764080 kubelet[2435]: I0213 18:52:08.762390    2435 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 18:52:08.764736 kubelet[2435]: E0213 18:52:08.764692    2435 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 18:52:08.765097 kubelet[2435]: I0213 18:52:08.765021    2435 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 18:52:08.767355 kubelet[2435]: I0213 18:52:08.767303    2435 factory.go:221] Registration of the containerd container factory successfully
Feb 13 18:52:08.767355 kubelet[2435]: I0213 18:52:08.767340    2435 factory.go:221] Registration of the systemd container factory successfully
Feb 13 18:52:08.795328 kubelet[2435]: W0213 18:52:08.795239    2435 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.21.163" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 13 18:52:08.795328 kubelet[2435]: E0213 18:52:08.795399    2435 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.21.163" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 13 18:52:08.796387 kubelet[2435]: E0213 18:52:08.796070    2435 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.21.163.1823d93b0c75d06c  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.21.163,UID:172.31.21.163,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172.31.21.163,},FirstTimestamp:2025-02-13 18:52:08.73048894 +0000 UTC m=+1.470038444,LastTimestamp:2025-02-13 18:52:08.73048894 +0000 UTC m=+1.470038444,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.21.163,}"
Feb 13 18:52:08.804172 kubelet[2435]: I0213 18:52:08.804124    2435 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 18:52:08.804172 kubelet[2435]: I0213 18:52:08.804155    2435 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 18:52:08.804471 kubelet[2435]: I0213 18:52:08.804187    2435 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 18:52:08.807625 kubelet[2435]: I0213 18:52:08.807583    2435 policy_none.go:49] "None policy: Start"
Feb 13 18:52:08.809262 kubelet[2435]: I0213 18:52:08.809224    2435 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 18:52:08.809393 kubelet[2435]: I0213 18:52:08.809272    2435 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 18:52:08.824745 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Feb 13 18:52:08.843589 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Feb 13 18:52:08.858953 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 13 18:52:08.860695 kubelet[2435]: I0213 18:52:08.860453    2435 kubelet_node_status.go:73] "Attempting to register node" node="172.31.21.163"
Feb 13 18:52:08.867393 kubelet[2435]: I0213 18:52:08.866993    2435 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 18:52:08.870918 kubelet[2435]: I0213 18:52:08.869574    2435 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 18:52:08.870918 kubelet[2435]: I0213 18:52:08.870085    2435 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 18:52:08.870918 kubelet[2435]: I0213 18:52:08.870135    2435 kubelet.go:2337] "Starting kubelet main sync loop"
Feb 13 18:52:08.870918 kubelet[2435]: I0213 18:52:08.870237    2435 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 18:52:08.870918 kubelet[2435]: E0213 18:52:08.870233    2435 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 13 18:52:08.870918 kubelet[2435]: I0213 18:52:08.870615    2435 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 18:52:08.870918 kubelet[2435]: I0213 18:52:08.870772    2435 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 18:52:08.881046 kubelet[2435]: E0213 18:52:08.880980    2435 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.21.163\" not found"
Feb 13 18:52:08.890162 kubelet[2435]: W0213 18:52:08.890100    2435 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 13 18:52:08.890162 kubelet[2435]: E0213 18:52:08.890166    2435 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 13 18:52:08.891682 kubelet[2435]: E0213 18:52:08.891633    2435 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.21.163\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Feb 13 18:52:08.892010 kubelet[2435]: W0213 18:52:08.891876    2435 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 13 18:52:08.892010 kubelet[2435]: E0213 18:52:08.891927    2435 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 13 18:52:08.892010 kubelet[2435]: E0213 18:52:08.891975    2435 kubelet_node_status.go:96] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.21.163"
Feb 13 18:52:08.892432 kubelet[2435]: E0213 18:52:08.892296    2435 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.21.163.1823d93b0e7f5be4  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.21.163,UID:172.31.21.163,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:172.31.21.163,},FirstTimestamp:2025-02-13 18:52:08.7646689 +0000 UTC m=+1.504218428,LastTimestamp:2025-02-13 18:52:08.7646689 +0000 UTC m=+1.504218428,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.21.163,}"
Feb 13 18:52:08.972465 kubelet[2435]: E0213 18:52:08.971409    2435 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.21.163.1823d93b10bf89c8  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.21.163,UID:172.31.21.163,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 172.31.21.163 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:172.31.21.163,},FirstTimestamp:2025-02-13 18:52:08.802429384 +0000 UTC m=+1.541978900,LastTimestamp:2025-02-13 18:52:08.802429384 +0000 UTC m=+1.541978900,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.21.163,}"
Feb 13 18:52:08.972465 kubelet[2435]: W0213 18:52:08.972038    2435 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 13 18:52:08.972465 kubelet[2435]: E0213 18:52:08.972078    2435 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 13 18:52:09.009944 kubelet[2435]: E0213 18:52:09.009747    2435 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{172.31.21.163.1823d93b10bff460  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172.31.21.163,UID:172.31.21.163,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 172.31.21.163 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:172.31.21.163,},FirstTimestamp:2025-02-13 18:52:08.802456672 +0000 UTC m=+1.542006188,LastTimestamp:2025-02-13 18:52:08.802456672 +0000 UTC m=+1.542006188,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172.31.21.163,}"
Feb 13 18:52:09.094075 kubelet[2435]: I0213 18:52:09.093772    2435 kubelet_node_status.go:73] "Attempting to register node" node="172.31.21.163"
Feb 13 18:52:09.116643 kubelet[2435]: I0213 18:52:09.116590    2435 kubelet_node_status.go:76] "Successfully registered node" node="172.31.21.163"
Feb 13 18:52:09.205492 kubelet[2435]: E0213 18:52:09.205440    2435 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.21.163\" not found"
Feb 13 18:52:09.306432 kubelet[2435]: E0213 18:52:09.306297    2435 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.21.163\" not found"
Feb 13 18:52:09.406887 kubelet[2435]: E0213 18:52:09.406820    2435 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.21.163\" not found"
Feb 13 18:52:09.507419 kubelet[2435]: E0213 18:52:09.507371    2435 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.21.163\" not found"
Feb 13 18:52:09.608153 kubelet[2435]: E0213 18:52:09.608039    2435 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.21.163\" not found"
Feb 13 18:52:09.677817 kubelet[2435]: I0213 18:52:09.677673    2435 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 13 18:52:09.709077 kubelet[2435]: E0213 18:52:09.709032    2435 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.21.163\" not found"
Feb 13 18:52:09.728330 kubelet[2435]: E0213 18:52:09.728290    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:09.800222 sudo[2294]: pam_unix(sudo:session): session closed for user root
Feb 13 18:52:09.809942 kubelet[2435]: E0213 18:52:09.809892    2435 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.21.163\" not found"
Feb 13 18:52:09.824383 sshd[2293]: Connection closed by 139.178.68.195 port 46466
Feb 13 18:52:09.824084 sshd-session[2291]: pam_unix(sshd:session): session closed for user core
Feb 13 18:52:09.831296 systemd[1]: sshd@8-172.31.21.163:22-139.178.68.195:46466.service: Deactivated successfully.
Feb 13 18:52:09.836583 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 18:52:09.840621 systemd-logind[1928]: Session 9 logged out. Waiting for processes to exit.
Feb 13 18:52:09.843098 systemd-logind[1928]: Removed session 9.
Feb 13 18:52:09.910228 kubelet[2435]: E0213 18:52:09.910040    2435 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"172.31.21.163\" not found"
Feb 13 18:52:10.011081 kubelet[2435]: I0213 18:52:10.011028    2435 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Feb 13 18:52:10.011517 containerd[1948]: time="2025-02-13T18:52:10.011468210Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 18:52:10.012257 kubelet[2435]: I0213 18:52:10.011767    2435 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Feb 13 18:52:10.728561 kubelet[2435]: I0213 18:52:10.728509    2435 apiserver.go:52] "Watching apiserver"
Feb 13 18:52:10.729185 kubelet[2435]: E0213 18:52:10.728939    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:10.740095 kubelet[2435]: I0213 18:52:10.740029    2435 topology_manager.go:215] "Topology Admit Handler" podUID="4adcfbb1-1c66-41c1-82e6-b44f21d4be22" podNamespace="calico-system" podName="calico-node-zn5w7"
Feb 13 18:52:10.740233 kubelet[2435]: I0213 18:52:10.740183    2435 topology_manager.go:215] "Topology Admit Handler" podUID="32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a" podNamespace="calico-system" podName="csi-node-driver-5lvxl"
Feb 13 18:52:10.741408 kubelet[2435]: I0213 18:52:10.740298    2435 topology_manager.go:215] "Topology Admit Handler" podUID="3ca7037a-c498-445d-9bf1-8c65ec4f39b1" podNamespace="kube-system" podName="kube-proxy-fqc92"
Feb 13 18:52:10.741408 kubelet[2435]: E0213 18:52:10.740552    2435 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5lvxl" podUID="32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a"
Feb 13 18:52:10.751970 systemd[1]: Created slice kubepods-besteffort-pod4adcfbb1_1c66_41c1_82e6_b44f21d4be22.slice - libcontainer container kubepods-besteffort-pod4adcfbb1_1c66_41c1_82e6_b44f21d4be22.slice.
Feb 13 18:52:10.761962 kubelet[2435]: I0213 18:52:10.761911    2435 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Feb 13 18:52:10.773629 kubelet[2435]: I0213 18:52:10.773225    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a-varrun\") pod \"csi-node-driver-5lvxl\" (UID: \"32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a\") " pod="calico-system/csi-node-driver-5lvxl"
Feb 13 18:52:10.773629 kubelet[2435]: I0213 18:52:10.773478    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a-kubelet-dir\") pod \"csi-node-driver-5lvxl\" (UID: \"32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a\") " pod="calico-system/csi-node-driver-5lvxl"
Feb 13 18:52:10.773629 kubelet[2435]: I0213 18:52:10.773553    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3ca7037a-c498-445d-9bf1-8c65ec4f39b1-xtables-lock\") pod \"kube-proxy-fqc92\" (UID: \"3ca7037a-c498-445d-9bf1-8c65ec4f39b1\") " pod="kube-system/kube-proxy-fqc92"
Feb 13 18:52:10.774529 kubelet[2435]: I0213 18:52:10.774113    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3ca7037a-c498-445d-9bf1-8c65ec4f39b1-kube-proxy\") pod \"kube-proxy-fqc92\" (UID: \"3ca7037a-c498-445d-9bf1-8c65ec4f39b1\") " pod="kube-system/kube-proxy-fqc92"
Feb 13 18:52:10.774529 kubelet[2435]: I0213 18:52:10.774189    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3ca7037a-c498-445d-9bf1-8c65ec4f39b1-lib-modules\") pod \"kube-proxy-fqc92\" (UID: \"3ca7037a-c498-445d-9bf1-8c65ec4f39b1\") " pod="kube-system/kube-proxy-fqc92"
Feb 13 18:52:10.774529 kubelet[2435]: I0213 18:52:10.774235    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkdgp\" (UniqueName: \"kubernetes.io/projected/3ca7037a-c498-445d-9bf1-8c65ec4f39b1-kube-api-access-kkdgp\") pod \"kube-proxy-fqc92\" (UID: \"3ca7037a-c498-445d-9bf1-8c65ec4f39b1\") " pod="kube-system/kube-proxy-fqc92"
Feb 13 18:52:10.774529 kubelet[2435]: I0213 18:52:10.774298    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4adcfbb1-1c66-41c1-82e6-b44f21d4be22-policysync\") pod \"calico-node-zn5w7\" (UID: \"4adcfbb1-1c66-41c1-82e6-b44f21d4be22\") " pod="calico-system/calico-node-zn5w7"
Feb 13 18:52:10.774529 kubelet[2435]: I0213 18:52:10.774357    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4adcfbb1-1c66-41c1-82e6-b44f21d4be22-node-certs\") pod \"calico-node-zn5w7\" (UID: \"4adcfbb1-1c66-41c1-82e6-b44f21d4be22\") " pod="calico-system/calico-node-zn5w7"
Feb 13 18:52:10.774789 kubelet[2435]: I0213 18:52:10.774395    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4adcfbb1-1c66-41c1-82e6-b44f21d4be22-var-lib-calico\") pod \"calico-node-zn5w7\" (UID: \"4adcfbb1-1c66-41c1-82e6-b44f21d4be22\") " pod="calico-system/calico-node-zn5w7"
Feb 13 18:52:10.774789 kubelet[2435]: I0213 18:52:10.774463    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4adcfbb1-1c66-41c1-82e6-b44f21d4be22-cni-net-dir\") pod \"calico-node-zn5w7\" (UID: \"4adcfbb1-1c66-41c1-82e6-b44f21d4be22\") " pod="calico-system/calico-node-zn5w7"
Feb 13 18:52:10.775312 kubelet[2435]: I0213 18:52:10.774928    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a-registration-dir\") pod \"csi-node-driver-5lvxl\" (UID: \"32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a\") " pod="calico-system/csi-node-driver-5lvxl"
Feb 13 18:52:10.775312 kubelet[2435]: I0213 18:52:10.774994    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4adcfbb1-1c66-41c1-82e6-b44f21d4be22-lib-modules\") pod \"calico-node-zn5w7\" (UID: \"4adcfbb1-1c66-41c1-82e6-b44f21d4be22\") " pod="calico-system/calico-node-zn5w7"
Feb 13 18:52:10.775312 kubelet[2435]: I0213 18:52:10.775032    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4adcfbb1-1c66-41c1-82e6-b44f21d4be22-var-run-calico\") pod \"calico-node-zn5w7\" (UID: \"4adcfbb1-1c66-41c1-82e6-b44f21d4be22\") " pod="calico-system/calico-node-zn5w7"
Feb 13 18:52:10.775312 kubelet[2435]: I0213 18:52:10.775097    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4adcfbb1-1c66-41c1-82e6-b44f21d4be22-cni-bin-dir\") pod \"calico-node-zn5w7\" (UID: \"4adcfbb1-1c66-41c1-82e6-b44f21d4be22\") " pod="calico-system/calico-node-zn5w7"
Feb 13 18:52:10.775312 kubelet[2435]: I0213 18:52:10.775137    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlg8m\" (UniqueName: \"kubernetes.io/projected/4adcfbb1-1c66-41c1-82e6-b44f21d4be22-kube-api-access-wlg8m\") pod \"calico-node-zn5w7\" (UID: \"4adcfbb1-1c66-41c1-82e6-b44f21d4be22\") " pod="calico-system/calico-node-zn5w7"
Feb 13 18:52:10.775604 kubelet[2435]: I0213 18:52:10.775201    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a-socket-dir\") pod \"csi-node-driver-5lvxl\" (UID: \"32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a\") " pod="calico-system/csi-node-driver-5lvxl"
Feb 13 18:52:10.776884 kubelet[2435]: I0213 18:52:10.775700    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4adcfbb1-1c66-41c1-82e6-b44f21d4be22-xtables-lock\") pod \"calico-node-zn5w7\" (UID: \"4adcfbb1-1c66-41c1-82e6-b44f21d4be22\") " pod="calico-system/calico-node-zn5w7"
Feb 13 18:52:10.776884 kubelet[2435]: I0213 18:52:10.775785    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4adcfbb1-1c66-41c1-82e6-b44f21d4be22-tigera-ca-bundle\") pod \"calico-node-zn5w7\" (UID: \"4adcfbb1-1c66-41c1-82e6-b44f21d4be22\") " pod="calico-system/calico-node-zn5w7"
Feb 13 18:52:10.776884 kubelet[2435]: I0213 18:52:10.776203    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4adcfbb1-1c66-41c1-82e6-b44f21d4be22-cni-log-dir\") pod \"calico-node-zn5w7\" (UID: \"4adcfbb1-1c66-41c1-82e6-b44f21d4be22\") " pod="calico-system/calico-node-zn5w7"
Feb 13 18:52:10.776884 kubelet[2435]: I0213 18:52:10.776264    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4adcfbb1-1c66-41c1-82e6-b44f21d4be22-flexvol-driver-host\") pod \"calico-node-zn5w7\" (UID: \"4adcfbb1-1c66-41c1-82e6-b44f21d4be22\") " pod="calico-system/calico-node-zn5w7"
Feb 13 18:52:10.776884 kubelet[2435]: I0213 18:52:10.776315    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5tzm\" (UniqueName: \"kubernetes.io/projected/32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a-kube-api-access-q5tzm\") pod \"csi-node-driver-5lvxl\" (UID: \"32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a\") " pod="calico-system/csi-node-driver-5lvxl"
Feb 13 18:52:10.776246 systemd[1]: Created slice kubepods-besteffort-pod3ca7037a_c498_445d_9bf1_8c65ec4f39b1.slice - libcontainer container kubepods-besteffort-pod3ca7037a_c498_445d_9bf1_8c65ec4f39b1.slice.
Feb 13 18:52:10.889913 kubelet[2435]: E0213 18:52:10.889365    2435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 18:52:10.889913 kubelet[2435]: W0213 18:52:10.889400    2435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 18:52:10.889913 kubelet[2435]: E0213 18:52:10.889444    2435 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 18:52:10.893666 kubelet[2435]: E0213 18:52:10.893615    2435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 18:52:10.893666 kubelet[2435]: W0213 18:52:10.893651    2435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 18:52:10.893916 kubelet[2435]: E0213 18:52:10.893686    2435 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 18:52:10.972997 kubelet[2435]: E0213 18:52:10.969960    2435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 18:52:10.972997 kubelet[2435]: W0213 18:52:10.969994    2435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 18:52:10.972997 kubelet[2435]: E0213 18:52:10.970025    2435 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 18:52:10.977659 kubelet[2435]: E0213 18:52:10.977620    2435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 18:52:10.978045 kubelet[2435]: W0213 18:52:10.977992    2435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 18:52:10.978195 kubelet[2435]: E0213 18:52:10.978170    2435 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 18:52:10.981348 kubelet[2435]: E0213 18:52:10.981202    2435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 18:52:10.981348 kubelet[2435]: W0213 18:52:10.981240    2435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 18:52:10.981348 kubelet[2435]: E0213 18:52:10.981282    2435 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 18:52:10.982510 kubelet[2435]: E0213 18:52:10.982467    2435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 18:52:10.982510 kubelet[2435]: W0213 18:52:10.982501    2435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 18:52:10.982740 kubelet[2435]: E0213 18:52:10.982532    2435 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 18:52:10.998167 kubelet[2435]: E0213 18:52:10.998117    2435 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 18:52:10.998167 kubelet[2435]: W0213 18:52:10.998154    2435 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 18:52:10.998328 kubelet[2435]: E0213 18:52:10.998188    2435 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 18:52:11.072862 containerd[1948]: time="2025-02-13T18:52:11.072786208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zn5w7,Uid:4adcfbb1-1c66-41c1-82e6-b44f21d4be22,Namespace:calico-system,Attempt:0,}"
Feb 13 18:52:11.091588 containerd[1948]: time="2025-02-13T18:52:11.091122724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fqc92,Uid:3ca7037a-c498-445d-9bf1-8c65ec4f39b1,Namespace:kube-system,Attempt:0,}"
Feb 13 18:52:11.643864 containerd[1948]: time="2025-02-13T18:52:11.643718058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 18:52:11.647350 containerd[1948]: time="2025-02-13T18:52:11.647281770Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 18:52:11.648386 containerd[1948]: time="2025-02-13T18:52:11.648238530Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Feb 13 18:52:11.649533 containerd[1948]: time="2025-02-13T18:52:11.649458306Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 18:52:11.650397 containerd[1948]: time="2025-02-13T18:52:11.650323170Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 18:52:11.656291 containerd[1948]: time="2025-02-13T18:52:11.656194566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 18:52:11.658975 containerd[1948]: time="2025-02-13T18:52:11.658307514Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 585.377078ms"
Feb 13 18:52:11.663331 containerd[1948]: time="2025-02-13T18:52:11.663105930Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 571.865774ms"
Feb 13 18:52:11.762309 kubelet[2435]: E0213 18:52:11.762225    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:11.867818 containerd[1948]: time="2025-02-13T18:52:11.867434263Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 18:52:11.867818 containerd[1948]: time="2025-02-13T18:52:11.867542239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 18:52:11.867818 containerd[1948]: time="2025-02-13T18:52:11.867568519Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:52:11.867818 containerd[1948]: time="2025-02-13T18:52:11.867691891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:52:11.871030 kubelet[2435]: E0213 18:52:11.870483    2435 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5lvxl" podUID="32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a"
Feb 13 18:52:11.878648 containerd[1948]: time="2025-02-13T18:52:11.878131772Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 18:52:11.878648 containerd[1948]: time="2025-02-13T18:52:11.878251352Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 18:52:11.878648 containerd[1948]: time="2025-02-13T18:52:11.878288084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:52:11.878648 containerd[1948]: time="2025-02-13T18:52:11.878443016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:52:11.894323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount633879667.mount: Deactivated successfully.
Feb 13 18:52:11.994978 systemd[1]: run-containerd-runc-k8s.io-71d3d75365efdf5bc629e19bd63e80a1e4c7518c66f7d54779708bfda1f2403d-runc.j1zpvt.mount: Deactivated successfully.
Feb 13 18:52:12.014208 systemd[1]: Started cri-containerd-3f438552059978b9bd57174854802172858bf7f9f2361e1d8733b083b7906cf2.scope - libcontainer container 3f438552059978b9bd57174854802172858bf7f9f2361e1d8733b083b7906cf2.
Feb 13 18:52:12.018271 systemd[1]: Started cri-containerd-71d3d75365efdf5bc629e19bd63e80a1e4c7518c66f7d54779708bfda1f2403d.scope - libcontainer container 71d3d75365efdf5bc629e19bd63e80a1e4c7518c66f7d54779708bfda1f2403d.
Feb 13 18:52:12.089028 containerd[1948]: time="2025-02-13T18:52:12.088912301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zn5w7,Uid:4adcfbb1-1c66-41c1-82e6-b44f21d4be22,Namespace:calico-system,Attempt:0,} returns sandbox id \"3f438552059978b9bd57174854802172858bf7f9f2361e1d8733b083b7906cf2\""
Feb 13 18:52:12.096701 containerd[1948]: time="2025-02-13T18:52:12.096637337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Feb 13 18:52:12.099022 containerd[1948]: time="2025-02-13T18:52:12.098734973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fqc92,Uid:3ca7037a-c498-445d-9bf1-8c65ec4f39b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"71d3d75365efdf5bc629e19bd63e80a1e4c7518c66f7d54779708bfda1f2403d\""
Feb 13 18:52:12.763388 kubelet[2435]: E0213 18:52:12.763327    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:13.429730 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2135173788.mount: Deactivated successfully.
Feb 13 18:52:13.549347 containerd[1948]: time="2025-02-13T18:52:13.549277172Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:52:13.550928 containerd[1948]: time="2025-02-13T18:52:13.550850732Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6487603"
Feb 13 18:52:13.552652 containerd[1948]: time="2025-02-13T18:52:13.552560216Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:52:13.557752 containerd[1948]: time="2025-02-13T18:52:13.557660228Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:52:13.559340 containerd[1948]: time="2025-02-13T18:52:13.559016792Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.462290775s"
Feb 13 18:52:13.559340 containerd[1948]: time="2025-02-13T18:52:13.559094300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\""
Feb 13 18:52:13.562131 containerd[1948]: time="2025-02-13T18:52:13.562025636Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\""
Feb 13 18:52:13.565383 containerd[1948]: time="2025-02-13T18:52:13.565199600Z" level=info msg="CreateContainer within sandbox \"3f438552059978b9bd57174854802172858bf7f9f2361e1d8733b083b7906cf2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Feb 13 18:52:13.594992 containerd[1948]: time="2025-02-13T18:52:13.594783932Z" level=info msg="CreateContainer within sandbox \"3f438552059978b9bd57174854802172858bf7f9f2361e1d8733b083b7906cf2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"5f91f8e25aa9a22fefc327d40e2d44456d46988c62aef7bed14f64cdbf11e956\""
Feb 13 18:52:13.598313 containerd[1948]: time="2025-02-13T18:52:13.596993444Z" level=info msg="StartContainer for \"5f91f8e25aa9a22fefc327d40e2d44456d46988c62aef7bed14f64cdbf11e956\""
Feb 13 18:52:13.652175 systemd[1]: Started cri-containerd-5f91f8e25aa9a22fefc327d40e2d44456d46988c62aef7bed14f64cdbf11e956.scope - libcontainer container 5f91f8e25aa9a22fefc327d40e2d44456d46988c62aef7bed14f64cdbf11e956.
Feb 13 18:52:13.712601 containerd[1948]: time="2025-02-13T18:52:13.712154049Z" level=info msg="StartContainer for \"5f91f8e25aa9a22fefc327d40e2d44456d46988c62aef7bed14f64cdbf11e956\" returns successfully"
Feb 13 18:52:13.741305 systemd[1]: cri-containerd-5f91f8e25aa9a22fefc327d40e2d44456d46988c62aef7bed14f64cdbf11e956.scope: Deactivated successfully.
Feb 13 18:52:13.764134 kubelet[2435]: E0213 18:52:13.764025    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:13.828388 containerd[1948]: time="2025-02-13T18:52:13.828244557Z" level=info msg="shim disconnected" id=5f91f8e25aa9a22fefc327d40e2d44456d46988c62aef7bed14f64cdbf11e956 namespace=k8s.io
Feb 13 18:52:13.828388 containerd[1948]: time="2025-02-13T18:52:13.828379953Z" level=warning msg="cleaning up after shim disconnected" id=5f91f8e25aa9a22fefc327d40e2d44456d46988c62aef7bed14f64cdbf11e956 namespace=k8s.io
Feb 13 18:52:13.828936 containerd[1948]: time="2025-02-13T18:52:13.828405933Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 18:52:13.871949 kubelet[2435]: E0213 18:52:13.871313    2435 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5lvxl" podUID="32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a"
Feb 13 18:52:14.392589 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f91f8e25aa9a22fefc327d40e2d44456d46988c62aef7bed14f64cdbf11e956-rootfs.mount: Deactivated successfully.
Feb 13 18:52:14.765136 kubelet[2435]: E0213 18:52:14.765054    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:14.900111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4041090066.mount: Deactivated successfully.
Feb 13 18:52:15.390220 containerd[1948]: time="2025-02-13T18:52:15.389949153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:52:15.395799 containerd[1948]: time="2025-02-13T18:52:15.395695593Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.10: active requests=0, bytes read=25663370"
Feb 13 18:52:15.399100 containerd[1948]: time="2025-02-13T18:52:15.398219589Z" level=info msg="ImageCreate event name:\"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:52:15.404479 containerd[1948]: time="2025-02-13T18:52:15.404406489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:52:15.406061 containerd[1948]: time="2025-02-13T18:52:15.406007769Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.10\" with image id \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\", repo tag \"registry.k8s.io/kube-proxy:v1.30.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:d112e804e548fce28d9f1e3282c9ce54e374451e6a2c41b1ca9d7fca5d1fcc48\", size \"25662389\" in 1.843921089s"
Feb 13 18:52:15.406270 containerd[1948]: time="2025-02-13T18:52:15.406237629Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.10\" returns image reference \"sha256:fa8af75a6512774cc93242474a9841ace82a7d0646001149fc65d92a8bb0c00a\""
Feb 13 18:52:15.408761 containerd[1948]: time="2025-02-13T18:52:15.408565053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Feb 13 18:52:15.410475 containerd[1948]: time="2025-02-13T18:52:15.410419041Z" level=info msg="CreateContainer within sandbox \"71d3d75365efdf5bc629e19bd63e80a1e4c7518c66f7d54779708bfda1f2403d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 18:52:15.429542 containerd[1948]: time="2025-02-13T18:52:15.429466281Z" level=info msg="CreateContainer within sandbox \"71d3d75365efdf5bc629e19bd63e80a1e4c7518c66f7d54779708bfda1f2403d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"245e15316cfd11dd073361f6adcb871d1ea9782407dc58014b1c3d2ec66f19fb\""
Feb 13 18:52:15.430843 containerd[1948]: time="2025-02-13T18:52:15.430778733Z" level=info msg="StartContainer for \"245e15316cfd11dd073361f6adcb871d1ea9782407dc58014b1c3d2ec66f19fb\""
Feb 13 18:52:15.487271 systemd[1]: run-containerd-runc-k8s.io-245e15316cfd11dd073361f6adcb871d1ea9782407dc58014b1c3d2ec66f19fb-runc.0b9qcB.mount: Deactivated successfully.
Feb 13 18:52:15.497350 systemd[1]: Started cri-containerd-245e15316cfd11dd073361f6adcb871d1ea9782407dc58014b1c3d2ec66f19fb.scope - libcontainer container 245e15316cfd11dd073361f6adcb871d1ea9782407dc58014b1c3d2ec66f19fb.
Feb 13 18:52:15.555883 containerd[1948]: time="2025-02-13T18:52:15.555694426Z" level=info msg="StartContainer for \"245e15316cfd11dd073361f6adcb871d1ea9782407dc58014b1c3d2ec66f19fb\" returns successfully"
Feb 13 18:52:15.765784 kubelet[2435]: E0213 18:52:15.765715    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:15.871365 kubelet[2435]: E0213 18:52:15.870711    2435 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5lvxl" podUID="32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a"
Feb 13 18:52:15.934853 kubelet[2435]: I0213 18:52:15.934723    2435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fqc92" podStartSLOduration=3.629420516 podStartE2EDuration="6.934705212s" podCreationTimestamp="2025-02-13 18:52:09 +0000 UTC" firstStartedPulling="2025-02-13 18:52:12.102172277 +0000 UTC m=+4.841721769" lastFinishedPulling="2025-02-13 18:52:15.407456961 +0000 UTC m=+8.147006465" observedRunningTime="2025-02-13 18:52:15.934303548 +0000 UTC m=+8.673853088" watchObservedRunningTime="2025-02-13 18:52:15.934705212 +0000 UTC m=+8.674254716"
Feb 13 18:52:16.766907 kubelet[2435]: E0213 18:52:16.766817    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:17.767264 kubelet[2435]: E0213 18:52:17.767194    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:17.870787 kubelet[2435]: E0213 18:52:17.870733    2435 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5lvxl" podUID="32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a"
Feb 13 18:52:18.768168 kubelet[2435]: E0213 18:52:18.768012    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:18.912070 containerd[1948]: time="2025-02-13T18:52:18.911872478Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:52:18.913542 containerd[1948]: time="2025-02-13T18:52:18.913471346Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123"
Feb 13 18:52:18.914580 containerd[1948]: time="2025-02-13T18:52:18.914480462Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:52:18.919221 containerd[1948]: time="2025-02-13T18:52:18.919124078Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:52:18.921061 containerd[1948]: time="2025-02-13T18:52:18.920853975Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 3.511825926s"
Feb 13 18:52:18.921061 containerd[1948]: time="2025-02-13T18:52:18.920912895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\""
Feb 13 18:52:18.927064 containerd[1948]: time="2025-02-13T18:52:18.926880219Z" level=info msg="CreateContainer within sandbox \"3f438552059978b9bd57174854802172858bf7f9f2361e1d8733b083b7906cf2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Feb 13 18:52:18.947115 containerd[1948]: time="2025-02-13T18:52:18.946907775Z" level=info msg="CreateContainer within sandbox \"3f438552059978b9bd57174854802172858bf7f9f2361e1d8733b083b7906cf2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d81ab1f25a41b0c0481db4ddb0d045da7abe47757b2c6b973d14f8965e06497c\""
Feb 13 18:52:18.948736 containerd[1948]: time="2025-02-13T18:52:18.948034983Z" level=info msg="StartContainer for \"d81ab1f25a41b0c0481db4ddb0d045da7abe47757b2c6b973d14f8965e06497c\""
Feb 13 18:52:19.008223 systemd[1]: Started cri-containerd-d81ab1f25a41b0c0481db4ddb0d045da7abe47757b2c6b973d14f8965e06497c.scope - libcontainer container d81ab1f25a41b0c0481db4ddb0d045da7abe47757b2c6b973d14f8965e06497c.
Feb 13 18:52:19.067670 containerd[1948]: time="2025-02-13T18:52:19.066885071Z" level=info msg="StartContainer for \"d81ab1f25a41b0c0481db4ddb0d045da7abe47757b2c6b973d14f8965e06497c\" returns successfully"
Feb 13 18:52:19.768458 kubelet[2435]: E0213 18:52:19.768365    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:19.871724 kubelet[2435]: E0213 18:52:19.871585    2435 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5lvxl" podUID="32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a"
Feb 13 18:52:20.375339 containerd[1948]: time="2025-02-13T18:52:20.375265094Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE         \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 18:52:20.379804 systemd[1]: cri-containerd-d81ab1f25a41b0c0481db4ddb0d045da7abe47757b2c6b973d14f8965e06497c.scope: Deactivated successfully.
Feb 13 18:52:20.392071 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 13 18:52:20.432198 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d81ab1f25a41b0c0481db4ddb0d045da7abe47757b2c6b973d14f8965e06497c-rootfs.mount: Deactivated successfully.
Feb 13 18:52:20.441020 kubelet[2435]: I0213 18:52:20.439513    2435 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Feb 13 18:52:20.769438 kubelet[2435]: E0213 18:52:20.769386    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:21.571353 containerd[1948]: time="2025-02-13T18:52:21.571034848Z" level=info msg="shim disconnected" id=d81ab1f25a41b0c0481db4ddb0d045da7abe47757b2c6b973d14f8965e06497c namespace=k8s.io
Feb 13 18:52:21.571353 containerd[1948]: time="2025-02-13T18:52:21.571109092Z" level=warning msg="cleaning up after shim disconnected" id=d81ab1f25a41b0c0481db4ddb0d045da7abe47757b2c6b973d14f8965e06497c namespace=k8s.io
Feb 13 18:52:21.571353 containerd[1948]: time="2025-02-13T18:52:21.571128064Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 18:52:21.591499 containerd[1948]: time="2025-02-13T18:52:21.590196988Z" level=warning msg="cleanup warnings time=\"2025-02-13T18:52:21Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Feb 13 18:52:21.771138 kubelet[2435]: E0213 18:52:21.771089    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:21.880028 systemd[1]: Created slice kubepods-besteffort-pod32e4b63f_eda9_4cc9_a124_9ffbc6d84e9a.slice - libcontainer container kubepods-besteffort-pod32e4b63f_eda9_4cc9_a124_9ffbc6d84e9a.slice.
Feb 13 18:52:21.884760 containerd[1948]: time="2025-02-13T18:52:21.884680481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5lvxl,Uid:32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a,Namespace:calico-system,Attempt:0,}"
Feb 13 18:52:21.937883 containerd[1948]: time="2025-02-13T18:52:21.937560785Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Feb 13 18:52:22.014137 containerd[1948]: time="2025-02-13T18:52:22.014031446Z" level=error msg="Failed to destroy network for sandbox \"bc861172b5d21550081865a595ca2af0c013fca7f8c42f3e45b59a71e76b78aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:22.017087 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bc861172b5d21550081865a595ca2af0c013fca7f8c42f3e45b59a71e76b78aa-shm.mount: Deactivated successfully.
Feb 13 18:52:22.017609 containerd[1948]: time="2025-02-13T18:52:22.017302622Z" level=error msg="encountered an error cleaning up failed sandbox \"bc861172b5d21550081865a595ca2af0c013fca7f8c42f3e45b59a71e76b78aa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:22.017609 containerd[1948]: time="2025-02-13T18:52:22.017482022Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5lvxl,Uid:32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bc861172b5d21550081865a595ca2af0c013fca7f8c42f3e45b59a71e76b78aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:22.018976 kubelet[2435]: E0213 18:52:22.018682    2435 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc861172b5d21550081865a595ca2af0c013fca7f8c42f3e45b59a71e76b78aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:22.018976 kubelet[2435]: E0213 18:52:22.018908    2435 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc861172b5d21550081865a595ca2af0c013fca7f8c42f3e45b59a71e76b78aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5lvxl"
Feb 13 18:52:22.018976 kubelet[2435]: E0213 18:52:22.018965    2435 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc861172b5d21550081865a595ca2af0c013fca7f8c42f3e45b59a71e76b78aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5lvxl"
Feb 13 18:52:22.020074 kubelet[2435]: E0213 18:52:22.019060    2435 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5lvxl_calico-system(32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5lvxl_calico-system(32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bc861172b5d21550081865a595ca2af0c013fca7f8c42f3e45b59a71e76b78aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5lvxl" podUID="32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a"
Feb 13 18:52:22.772562 kubelet[2435]: E0213 18:52:22.772498    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:22.937867 kubelet[2435]: I0213 18:52:22.936852    2435 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc861172b5d21550081865a595ca2af0c013fca7f8c42f3e45b59a71e76b78aa"
Feb 13 18:52:22.938256 containerd[1948]: time="2025-02-13T18:52:22.938211330Z" level=info msg="StopPodSandbox for \"bc861172b5d21550081865a595ca2af0c013fca7f8c42f3e45b59a71e76b78aa\""
Feb 13 18:52:22.938912 containerd[1948]: time="2025-02-13T18:52:22.938486742Z" level=info msg="Ensure that sandbox bc861172b5d21550081865a595ca2af0c013fca7f8c42f3e45b59a71e76b78aa in task-service has been cleanup successfully"
Feb 13 18:52:22.941416 containerd[1948]: time="2025-02-13T18:52:22.941158386Z" level=info msg="TearDown network for sandbox \"bc861172b5d21550081865a595ca2af0c013fca7f8c42f3e45b59a71e76b78aa\" successfully"
Feb 13 18:52:22.941416 containerd[1948]: time="2025-02-13T18:52:22.941239554Z" level=info msg="StopPodSandbox for \"bc861172b5d21550081865a595ca2af0c013fca7f8c42f3e45b59a71e76b78aa\" returns successfully"
Feb 13 18:52:22.941823 systemd[1]: run-netns-cni\x2d59b928be\x2de59d\x2d45c5\x2d56b7\x2d4d0677b1365c.mount: Deactivated successfully.
Feb 13 18:52:22.944321 containerd[1948]: time="2025-02-13T18:52:22.942803874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5lvxl,Uid:32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a,Namespace:calico-system,Attempt:1,}"
Feb 13 18:52:23.058889 containerd[1948]: time="2025-02-13T18:52:23.058542123Z" level=error msg="Failed to destroy network for sandbox \"322dd35104afe95111ae1ba3bdfdf0115e8b862a56b999b203771dea0aa62146\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:23.059304 containerd[1948]: time="2025-02-13T18:52:23.059227371Z" level=error msg="encountered an error cleaning up failed sandbox \"322dd35104afe95111ae1ba3bdfdf0115e8b862a56b999b203771dea0aa62146\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:23.059405 containerd[1948]: time="2025-02-13T18:52:23.059346423Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5lvxl,Uid:32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"322dd35104afe95111ae1ba3bdfdf0115e8b862a56b999b203771dea0aa62146\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:23.061960 kubelet[2435]: E0213 18:52:23.060193    2435 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"322dd35104afe95111ae1ba3bdfdf0115e8b862a56b999b203771dea0aa62146\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:23.062893 kubelet[2435]: E0213 18:52:23.062095    2435 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"322dd35104afe95111ae1ba3bdfdf0115e8b862a56b999b203771dea0aa62146\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5lvxl"
Feb 13 18:52:23.062893 kubelet[2435]: E0213 18:52:23.062317    2435 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"322dd35104afe95111ae1ba3bdfdf0115e8b862a56b999b203771dea0aa62146\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5lvxl"
Feb 13 18:52:23.062893 kubelet[2435]: E0213 18:52:23.062447    2435 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5lvxl_calico-system(32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5lvxl_calico-system(32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"322dd35104afe95111ae1ba3bdfdf0115e8b862a56b999b203771dea0aa62146\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5lvxl" podUID="32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a"
Feb 13 18:52:23.062507 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-322dd35104afe95111ae1ba3bdfdf0115e8b862a56b999b203771dea0aa62146-shm.mount: Deactivated successfully.
Feb 13 18:52:23.115494 kubelet[2435]: I0213 18:52:23.115427    2435 topology_manager.go:215] "Topology Admit Handler" podUID="5206b6f6-6cc0-4889-b67e-8705aab95f76" podNamespace="default" podName="nginx-deployment-85f456d6dd-4lqsw"
Feb 13 18:52:23.125499 systemd[1]: Created slice kubepods-besteffort-pod5206b6f6_6cc0_4889_b67e_8705aab95f76.slice - libcontainer container kubepods-besteffort-pod5206b6f6_6cc0_4889_b67e_8705aab95f76.slice.
Feb 13 18:52:23.155406 kubelet[2435]: I0213 18:52:23.155328    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjjh2\" (UniqueName: \"kubernetes.io/projected/5206b6f6-6cc0-4889-b67e-8705aab95f76-kube-api-access-tjjh2\") pod \"nginx-deployment-85f456d6dd-4lqsw\" (UID: \"5206b6f6-6cc0-4889-b67e-8705aab95f76\") " pod="default/nginx-deployment-85f456d6dd-4lqsw"
Feb 13 18:52:23.432676 containerd[1948]: time="2025-02-13T18:52:23.432097937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-4lqsw,Uid:5206b6f6-6cc0-4889-b67e-8705aab95f76,Namespace:default,Attempt:0,}"
Feb 13 18:52:23.658703 containerd[1948]: time="2025-02-13T18:52:23.658621074Z" level=error msg="Failed to destroy network for sandbox \"32af185d42ece37e498a1de9c848ae9e10bd3f1cf48c65e388d7523c4fd6fcee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:23.659383 containerd[1948]: time="2025-02-13T18:52:23.659320446Z" level=error msg="encountered an error cleaning up failed sandbox \"32af185d42ece37e498a1de9c848ae9e10bd3f1cf48c65e388d7523c4fd6fcee\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:23.659495 containerd[1948]: time="2025-02-13T18:52:23.659456982Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-4lqsw,Uid:5206b6f6-6cc0-4889-b67e-8705aab95f76,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"32af185d42ece37e498a1de9c848ae9e10bd3f1cf48c65e388d7523c4fd6fcee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:23.660331 kubelet[2435]: E0213 18:52:23.660040    2435 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32af185d42ece37e498a1de9c848ae9e10bd3f1cf48c65e388d7523c4fd6fcee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:23.660331 kubelet[2435]: E0213 18:52:23.660181    2435 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32af185d42ece37e498a1de9c848ae9e10bd3f1cf48c65e388d7523c4fd6fcee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-4lqsw"
Feb 13 18:52:23.660331 kubelet[2435]: E0213 18:52:23.660221    2435 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"32af185d42ece37e498a1de9c848ae9e10bd3f1cf48c65e388d7523c4fd6fcee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-4lqsw"
Feb 13 18:52:23.660806 kubelet[2435]: E0213 18:52:23.660304    2435 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-4lqsw_default(5206b6f6-6cc0-4889-b67e-8705aab95f76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-4lqsw_default(5206b6f6-6cc0-4889-b67e-8705aab95f76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"32af185d42ece37e498a1de9c848ae9e10bd3f1cf48c65e388d7523c4fd6fcee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-4lqsw" podUID="5206b6f6-6cc0-4889-b67e-8705aab95f76"
Feb 13 18:52:23.773612 kubelet[2435]: E0213 18:52:23.773495    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:23.947820 kubelet[2435]: I0213 18:52:23.947769    2435 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="322dd35104afe95111ae1ba3bdfdf0115e8b862a56b999b203771dea0aa62146"
Feb 13 18:52:23.949107 containerd[1948]: time="2025-02-13T18:52:23.948964063Z" level=info msg="StopPodSandbox for \"322dd35104afe95111ae1ba3bdfdf0115e8b862a56b999b203771dea0aa62146\""
Feb 13 18:52:23.949795 containerd[1948]: time="2025-02-13T18:52:23.949238803Z" level=info msg="Ensure that sandbox 322dd35104afe95111ae1ba3bdfdf0115e8b862a56b999b203771dea0aa62146 in task-service has been cleanup successfully"
Feb 13 18:52:23.955899 containerd[1948]: time="2025-02-13T18:52:23.951907687Z" level=info msg="TearDown network for sandbox \"322dd35104afe95111ae1ba3bdfdf0115e8b862a56b999b203771dea0aa62146\" successfully"
Feb 13 18:52:23.955899 containerd[1948]: time="2025-02-13T18:52:23.951969043Z" level=info msg="StopPodSandbox for \"322dd35104afe95111ae1ba3bdfdf0115e8b862a56b999b203771dea0aa62146\" returns successfully"
Feb 13 18:52:23.955413 systemd[1]: run-netns-cni\x2d92a052a2\x2db0c3\x2def6a\x2d90a7\x2d0646ea3509bb.mount: Deactivated successfully.
Feb 13 18:52:23.956679 kubelet[2435]: I0213 18:52:23.954317    2435 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32af185d42ece37e498a1de9c848ae9e10bd3f1cf48c65e388d7523c4fd6fcee"
Feb 13 18:52:23.960950 containerd[1948]: time="2025-02-13T18:52:23.958232396Z" level=info msg="StopPodSandbox for \"32af185d42ece37e498a1de9c848ae9e10bd3f1cf48c65e388d7523c4fd6fcee\""
Feb 13 18:52:23.960950 containerd[1948]: time="2025-02-13T18:52:23.958557212Z" level=info msg="Ensure that sandbox 32af185d42ece37e498a1de9c848ae9e10bd3f1cf48c65e388d7523c4fd6fcee in task-service has been cleanup successfully"
Feb 13 18:52:23.960950 containerd[1948]: time="2025-02-13T18:52:23.958905212Z" level=info msg="TearDown network for sandbox \"32af185d42ece37e498a1de9c848ae9e10bd3f1cf48c65e388d7523c4fd6fcee\" successfully"
Feb 13 18:52:23.960950 containerd[1948]: time="2025-02-13T18:52:23.959228840Z" level=info msg="StopPodSandbox for \"32af185d42ece37e498a1de9c848ae9e10bd3f1cf48c65e388d7523c4fd6fcee\" returns successfully"
Feb 13 18:52:23.960950 containerd[1948]: time="2025-02-13T18:52:23.959720000Z" level=info msg="StopPodSandbox for \"bc861172b5d21550081865a595ca2af0c013fca7f8c42f3e45b59a71e76b78aa\""
Feb 13 18:52:23.960950 containerd[1948]: time="2025-02-13T18:52:23.959884268Z" level=info msg="TearDown network for sandbox \"bc861172b5d21550081865a595ca2af0c013fca7f8c42f3e45b59a71e76b78aa\" successfully"
Feb 13 18:52:23.960950 containerd[1948]: time="2025-02-13T18:52:23.959906648Z" level=info msg="StopPodSandbox for \"bc861172b5d21550081865a595ca2af0c013fca7f8c42f3e45b59a71e76b78aa\" returns successfully"
Feb 13 18:52:23.962574 systemd[1]: run-netns-cni\x2d1fc13f64\x2d23af\x2d870e\x2d288e\x2d9a29162b0654.mount: Deactivated successfully.
Feb 13 18:52:23.963576 containerd[1948]: time="2025-02-13T18:52:23.963246092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-4lqsw,Uid:5206b6f6-6cc0-4889-b67e-8705aab95f76,Namespace:default,Attempt:1,}"
Feb 13 18:52:23.963649 containerd[1948]: time="2025-02-13T18:52:23.963626168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5lvxl,Uid:32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a,Namespace:calico-system,Attempt:2,}"
Feb 13 18:52:24.295779 containerd[1948]: time="2025-02-13T18:52:24.295514585Z" level=error msg="Failed to destroy network for sandbox \"0cdb8e29a262e3a3eab1adc26297d2301b8e5ad2c49e7bf4e1e39b5e9f12ac3d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:24.299115 containerd[1948]: time="2025-02-13T18:52:24.297784133Z" level=error msg="encountered an error cleaning up failed sandbox \"0cdb8e29a262e3a3eab1adc26297d2301b8e5ad2c49e7bf4e1e39b5e9f12ac3d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:24.299115 containerd[1948]: time="2025-02-13T18:52:24.298760021Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5lvxl,Uid:32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"0cdb8e29a262e3a3eab1adc26297d2301b8e5ad2c49e7bf4e1e39b5e9f12ac3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:24.300134 kubelet[2435]: E0213 18:52:24.299744    2435 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cdb8e29a262e3a3eab1adc26297d2301b8e5ad2c49e7bf4e1e39b5e9f12ac3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:24.300134 kubelet[2435]: E0213 18:52:24.299931    2435 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cdb8e29a262e3a3eab1adc26297d2301b8e5ad2c49e7bf4e1e39b5e9f12ac3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5lvxl"
Feb 13 18:52:24.300134 kubelet[2435]: E0213 18:52:24.299975    2435 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cdb8e29a262e3a3eab1adc26297d2301b8e5ad2c49e7bf4e1e39b5e9f12ac3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5lvxl"
Feb 13 18:52:24.300365 kubelet[2435]: E0213 18:52:24.300069    2435 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5lvxl_calico-system(32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5lvxl_calico-system(32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0cdb8e29a262e3a3eab1adc26297d2301b8e5ad2c49e7bf4e1e39b5e9f12ac3d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5lvxl" podUID="32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a"
Feb 13 18:52:24.313263 containerd[1948]: time="2025-02-13T18:52:24.313003157Z" level=error msg="Failed to destroy network for sandbox \"8f6280e06118461cd1ecfd80ffda28c3bfa6ef158e268e57d29983e1d9900d0a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:24.314355 containerd[1948]: time="2025-02-13T18:52:24.314040581Z" level=error msg="encountered an error cleaning up failed sandbox \"8f6280e06118461cd1ecfd80ffda28c3bfa6ef158e268e57d29983e1d9900d0a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:24.314355 containerd[1948]: time="2025-02-13T18:52:24.314180429Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-4lqsw,Uid:5206b6f6-6cc0-4889-b67e-8705aab95f76,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"8f6280e06118461cd1ecfd80ffda28c3bfa6ef158e268e57d29983e1d9900d0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:24.315722 kubelet[2435]: E0213 18:52:24.315029    2435 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f6280e06118461cd1ecfd80ffda28c3bfa6ef158e268e57d29983e1d9900d0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:24.315722 kubelet[2435]: E0213 18:52:24.315137    2435 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f6280e06118461cd1ecfd80ffda28c3bfa6ef158e268e57d29983e1d9900d0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-4lqsw"
Feb 13 18:52:24.315722 kubelet[2435]: E0213 18:52:24.315186    2435 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f6280e06118461cd1ecfd80ffda28c3bfa6ef158e268e57d29983e1d9900d0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-4lqsw"
Feb 13 18:52:24.316083 kubelet[2435]: E0213 18:52:24.315272    2435 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-4lqsw_default(5206b6f6-6cc0-4889-b67e-8705aab95f76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-4lqsw_default(5206b6f6-6cc0-4889-b67e-8705aab95f76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f6280e06118461cd1ecfd80ffda28c3bfa6ef158e268e57d29983e1d9900d0a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-4lqsw" podUID="5206b6f6-6cc0-4889-b67e-8705aab95f76"
Feb 13 18:52:24.774146 kubelet[2435]: E0213 18:52:24.774063    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:24.943746 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8f6280e06118461cd1ecfd80ffda28c3bfa6ef158e268e57d29983e1d9900d0a-shm.mount: Deactivated successfully.
Feb 13 18:52:24.962253 kubelet[2435]: I0213 18:52:24.962213    2435 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cdb8e29a262e3a3eab1adc26297d2301b8e5ad2c49e7bf4e1e39b5e9f12ac3d"
Feb 13 18:52:24.964397 containerd[1948]: time="2025-02-13T18:52:24.963903897Z" level=info msg="StopPodSandbox for \"0cdb8e29a262e3a3eab1adc26297d2301b8e5ad2c49e7bf4e1e39b5e9f12ac3d\""
Feb 13 18:52:24.965274 containerd[1948]: time="2025-02-13T18:52:24.965177373Z" level=info msg="Ensure that sandbox 0cdb8e29a262e3a3eab1adc26297d2301b8e5ad2c49e7bf4e1e39b5e9f12ac3d in task-service has been cleanup successfully"
Feb 13 18:52:24.968387 containerd[1948]: time="2025-02-13T18:52:24.965615313Z" level=info msg="TearDown network for sandbox \"0cdb8e29a262e3a3eab1adc26297d2301b8e5ad2c49e7bf4e1e39b5e9f12ac3d\" successfully"
Feb 13 18:52:24.968387 containerd[1948]: time="2025-02-13T18:52:24.965654517Z" level=info msg="StopPodSandbox for \"0cdb8e29a262e3a3eab1adc26297d2301b8e5ad2c49e7bf4e1e39b5e9f12ac3d\" returns successfully"
Feb 13 18:52:24.972281 containerd[1948]: time="2025-02-13T18:52:24.970146093Z" level=info msg="StopPodSandbox for \"322dd35104afe95111ae1ba3bdfdf0115e8b862a56b999b203771dea0aa62146\""
Feb 13 18:52:24.972281 containerd[1948]: time="2025-02-13T18:52:24.970314201Z" level=info msg="TearDown network for sandbox \"322dd35104afe95111ae1ba3bdfdf0115e8b862a56b999b203771dea0aa62146\" successfully"
Feb 13 18:52:24.972281 containerd[1948]: time="2025-02-13T18:52:24.970335765Z" level=info msg="StopPodSandbox for \"322dd35104afe95111ae1ba3bdfdf0115e8b862a56b999b203771dea0aa62146\" returns successfully"
Feb 13 18:52:24.971344 systemd[1]: run-netns-cni\x2ddc9c3ffb\x2dbb52\x2d8bee\x2d1da3\x2db0da613cf439.mount: Deactivated successfully.
Feb 13 18:52:24.973495 containerd[1948]: time="2025-02-13T18:52:24.973329261Z" level=info msg="StopPodSandbox for \"bc861172b5d21550081865a595ca2af0c013fca7f8c42f3e45b59a71e76b78aa\""
Feb 13 18:52:24.973617 containerd[1948]: time="2025-02-13T18:52:24.973507833Z" level=info msg="TearDown network for sandbox \"bc861172b5d21550081865a595ca2af0c013fca7f8c42f3e45b59a71e76b78aa\" successfully"
Feb 13 18:52:24.973617 containerd[1948]: time="2025-02-13T18:52:24.973531509Z" level=info msg="StopPodSandbox for \"bc861172b5d21550081865a595ca2af0c013fca7f8c42f3e45b59a71e76b78aa\" returns successfully"
Feb 13 18:52:24.975194 kubelet[2435]: I0213 18:52:24.974775    2435 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f6280e06118461cd1ecfd80ffda28c3bfa6ef158e268e57d29983e1d9900d0a"
Feb 13 18:52:24.976336 containerd[1948]: time="2025-02-13T18:52:24.976183941Z" level=info msg="StopPodSandbox for \"8f6280e06118461cd1ecfd80ffda28c3bfa6ef158e268e57d29983e1d9900d0a\""
Feb 13 18:52:24.976490 containerd[1948]: time="2025-02-13T18:52:24.976186245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5lvxl,Uid:32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a,Namespace:calico-system,Attempt:3,}"
Feb 13 18:52:24.977144 containerd[1948]: time="2025-02-13T18:52:24.977033793Z" level=info msg="Ensure that sandbox 8f6280e06118461cd1ecfd80ffda28c3bfa6ef158e268e57d29983e1d9900d0a in task-service has been cleanup successfully"
Feb 13 18:52:24.979338 containerd[1948]: time="2025-02-13T18:52:24.977765337Z" level=info msg="TearDown network for sandbox \"8f6280e06118461cd1ecfd80ffda28c3bfa6ef158e268e57d29983e1d9900d0a\" successfully"
Feb 13 18:52:24.979338 containerd[1948]: time="2025-02-13T18:52:24.978105669Z" level=info msg="StopPodSandbox for \"8f6280e06118461cd1ecfd80ffda28c3bfa6ef158e268e57d29983e1d9900d0a\" returns successfully"
Feb 13 18:52:24.981597 containerd[1948]: time="2025-02-13T18:52:24.981340821Z" level=info msg="StopPodSandbox for \"32af185d42ece37e498a1de9c848ae9e10bd3f1cf48c65e388d7523c4fd6fcee\""
Feb 13 18:52:24.982878 containerd[1948]: time="2025-02-13T18:52:24.982543797Z" level=info msg="TearDown network for sandbox \"32af185d42ece37e498a1de9c848ae9e10bd3f1cf48c65e388d7523c4fd6fcee\" successfully"
Feb 13 18:52:24.982878 containerd[1948]: time="2025-02-13T18:52:24.982592277Z" level=info msg="StopPodSandbox for \"32af185d42ece37e498a1de9c848ae9e10bd3f1cf48c65e388d7523c4fd6fcee\" returns successfully"
Feb 13 18:52:24.983426 systemd[1]: run-netns-cni\x2d54799cb3\x2dd033\x2d751f\x2de49e\x2decff7284eeee.mount: Deactivated successfully.
Feb 13 18:52:24.987401 containerd[1948]: time="2025-02-13T18:52:24.987331917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-4lqsw,Uid:5206b6f6-6cc0-4889-b67e-8705aab95f76,Namespace:default,Attempt:2,}"
Feb 13 18:52:25.183733 containerd[1948]: time="2025-02-13T18:52:25.183568230Z" level=error msg="Failed to destroy network for sandbox \"8b0a86c76052a5da806a8314fbf5db1cb373122c98e2931eef3e0db116fa8bde\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:25.187759 containerd[1948]: time="2025-02-13T18:52:25.187359366Z" level=error msg="encountered an error cleaning up failed sandbox \"8b0a86c76052a5da806a8314fbf5db1cb373122c98e2931eef3e0db116fa8bde\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:25.188944 containerd[1948]: time="2025-02-13T18:52:25.188560962Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5lvxl,Uid:32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"8b0a86c76052a5da806a8314fbf5db1cb373122c98e2931eef3e0db116fa8bde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:25.190302 kubelet[2435]: E0213 18:52:25.189226    2435 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b0a86c76052a5da806a8314fbf5db1cb373122c98e2931eef3e0db116fa8bde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:25.190302 kubelet[2435]: E0213 18:52:25.189307    2435 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b0a86c76052a5da806a8314fbf5db1cb373122c98e2931eef3e0db116fa8bde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5lvxl"
Feb 13 18:52:25.190302 kubelet[2435]: E0213 18:52:25.189341    2435 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b0a86c76052a5da806a8314fbf5db1cb373122c98e2931eef3e0db116fa8bde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5lvxl"
Feb 13 18:52:25.190665 kubelet[2435]: E0213 18:52:25.189406    2435 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5lvxl_calico-system(32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5lvxl_calico-system(32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b0a86c76052a5da806a8314fbf5db1cb373122c98e2931eef3e0db116fa8bde\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5lvxl" podUID="32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a"
Feb 13 18:52:25.220112 containerd[1948]: time="2025-02-13T18:52:25.219629826Z" level=error msg="Failed to destroy network for sandbox \"b754c1aed0174a8de7a8424c7c47055c44429e98fa84e4b4c97aaba818efe109\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:25.221921 containerd[1948]: time="2025-02-13T18:52:25.221740926Z" level=error msg="encountered an error cleaning up failed sandbox \"b754c1aed0174a8de7a8424c7c47055c44429e98fa84e4b4c97aaba818efe109\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:25.222437 containerd[1948]: time="2025-02-13T18:52:25.222178650Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-4lqsw,Uid:5206b6f6-6cc0-4889-b67e-8705aab95f76,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"b754c1aed0174a8de7a8424c7c47055c44429e98fa84e4b4c97aaba818efe109\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:25.223190 kubelet[2435]: E0213 18:52:25.223130    2435 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b754c1aed0174a8de7a8424c7c47055c44429e98fa84e4b4c97aaba818efe109\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:25.223620 kubelet[2435]: E0213 18:52:25.223421    2435 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b754c1aed0174a8de7a8424c7c47055c44429e98fa84e4b4c97aaba818efe109\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-4lqsw"
Feb 13 18:52:25.223620 kubelet[2435]: E0213 18:52:25.223464    2435 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b754c1aed0174a8de7a8424c7c47055c44429e98fa84e4b4c97aaba818efe109\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-4lqsw"
Feb 13 18:52:25.223620 kubelet[2435]: E0213 18:52:25.223544    2435 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-4lqsw_default(5206b6f6-6cc0-4889-b67e-8705aab95f76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-4lqsw_default(5206b6f6-6cc0-4889-b67e-8705aab95f76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b754c1aed0174a8de7a8424c7c47055c44429e98fa84e4b4c97aaba818efe109\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-4lqsw" podUID="5206b6f6-6cc0-4889-b67e-8705aab95f76"
Feb 13 18:52:25.774640 kubelet[2435]: E0213 18:52:25.774585    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:25.945584 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8b0a86c76052a5da806a8314fbf5db1cb373122c98e2931eef3e0db116fa8bde-shm.mount: Deactivated successfully.
Feb 13 18:52:25.983715 kubelet[2435]: I0213 18:52:25.983667    2435 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b754c1aed0174a8de7a8424c7c47055c44429e98fa84e4b4c97aaba818efe109"
Feb 13 18:52:25.986219 containerd[1948]: time="2025-02-13T18:52:25.986156866Z" level=info msg="StopPodSandbox for \"b754c1aed0174a8de7a8424c7c47055c44429e98fa84e4b4c97aaba818efe109\""
Feb 13 18:52:25.989092 containerd[1948]: time="2025-02-13T18:52:25.986496286Z" level=info msg="Ensure that sandbox b754c1aed0174a8de7a8424c7c47055c44429e98fa84e4b4c97aaba818efe109 in task-service has been cleanup successfully"
Feb 13 18:52:25.994674 systemd[1]: run-netns-cni\x2d0fca21bb\x2d4da7\x2da55d\x2dede9\x2db43063e5ec33.mount: Deactivated successfully.
Feb 13 18:52:25.999576 containerd[1948]: time="2025-02-13T18:52:25.999216118Z" level=info msg="TearDown network for sandbox \"b754c1aed0174a8de7a8424c7c47055c44429e98fa84e4b4c97aaba818efe109\" successfully"
Feb 13 18:52:25.999576 containerd[1948]: time="2025-02-13T18:52:25.999444970Z" level=info msg="StopPodSandbox for \"b754c1aed0174a8de7a8424c7c47055c44429e98fa84e4b4c97aaba818efe109\" returns successfully"
Feb 13 18:52:26.001175 containerd[1948]: time="2025-02-13T18:52:26.000902058Z" level=info msg="StopPodSandbox for \"8f6280e06118461cd1ecfd80ffda28c3bfa6ef158e268e57d29983e1d9900d0a\""
Feb 13 18:52:26.001175 containerd[1948]: time="2025-02-13T18:52:26.001092282Z" level=info msg="TearDown network for sandbox \"8f6280e06118461cd1ecfd80ffda28c3bfa6ef158e268e57d29983e1d9900d0a\" successfully"
Feb 13 18:52:26.001175 containerd[1948]: time="2025-02-13T18:52:26.001115946Z" level=info msg="StopPodSandbox for \"8f6280e06118461cd1ecfd80ffda28c3bfa6ef158e268e57d29983e1d9900d0a\" returns successfully"
Feb 13 18:52:26.001671 kubelet[2435]: I0213 18:52:26.001492    2435 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b0a86c76052a5da806a8314fbf5db1cb373122c98e2931eef3e0db116fa8bde"
Feb 13 18:52:26.003510 containerd[1948]: time="2025-02-13T18:52:26.003209886Z" level=info msg="StopPodSandbox for \"8b0a86c76052a5da806a8314fbf5db1cb373122c98e2931eef3e0db116fa8bde\""
Feb 13 18:52:26.005273 containerd[1948]: time="2025-02-13T18:52:26.005146254Z" level=info msg="StopPodSandbox for \"32af185d42ece37e498a1de9c848ae9e10bd3f1cf48c65e388d7523c4fd6fcee\""
Feb 13 18:52:26.006403 containerd[1948]: time="2025-02-13T18:52:26.006354906Z" level=info msg="TearDown network for sandbox \"32af185d42ece37e498a1de9c848ae9e10bd3f1cf48c65e388d7523c4fd6fcee\" successfully"
Feb 13 18:52:26.006403 containerd[1948]: time="2025-02-13T18:52:26.006398310Z" level=info msg="StopPodSandbox for \"32af185d42ece37e498a1de9c848ae9e10bd3f1cf48c65e388d7523c4fd6fcee\" returns successfully"
Feb 13 18:52:26.006589 containerd[1948]: time="2025-02-13T18:52:26.005291646Z" level=info msg="Ensure that sandbox 8b0a86c76052a5da806a8314fbf5db1cb373122c98e2931eef3e0db116fa8bde in task-service has been cleanup successfully"
Feb 13 18:52:26.009614 containerd[1948]: time="2025-02-13T18:52:26.007012554Z" level=info msg="TearDown network for sandbox \"8b0a86c76052a5da806a8314fbf5db1cb373122c98e2931eef3e0db116fa8bde\" successfully"
Feb 13 18:52:26.009614 containerd[1948]: time="2025-02-13T18:52:26.009144774Z" level=info msg="StopPodSandbox for \"8b0a86c76052a5da806a8314fbf5db1cb373122c98e2931eef3e0db116fa8bde\" returns successfully"
Feb 13 18:52:26.009614 containerd[1948]: time="2025-02-13T18:52:26.009458034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-4lqsw,Uid:5206b6f6-6cc0-4889-b67e-8705aab95f76,Namespace:default,Attempt:3,}"
Feb 13 18:52:26.013189 systemd[1]: run-netns-cni\x2d0054e88a\x2da7ce\x2d6dec\x2d6d3b\x2d157e62b774ab.mount: Deactivated successfully.
Feb 13 18:52:26.016801 containerd[1948]: time="2025-02-13T18:52:26.016540890Z" level=info msg="StopPodSandbox for \"0cdb8e29a262e3a3eab1adc26297d2301b8e5ad2c49e7bf4e1e39b5e9f12ac3d\""
Feb 13 18:52:26.016801 containerd[1948]: time="2025-02-13T18:52:26.016742730Z" level=info msg="TearDown network for sandbox \"0cdb8e29a262e3a3eab1adc26297d2301b8e5ad2c49e7bf4e1e39b5e9f12ac3d\" successfully"
Feb 13 18:52:26.016801 containerd[1948]: time="2025-02-13T18:52:26.016768218Z" level=info msg="StopPodSandbox for \"0cdb8e29a262e3a3eab1adc26297d2301b8e5ad2c49e7bf4e1e39b5e9f12ac3d\" returns successfully"
Feb 13 18:52:26.019122 containerd[1948]: time="2025-02-13T18:52:26.018877254Z" level=info msg="StopPodSandbox for \"322dd35104afe95111ae1ba3bdfdf0115e8b862a56b999b203771dea0aa62146\""
Feb 13 18:52:26.019953 containerd[1948]: time="2025-02-13T18:52:26.019740870Z" level=info msg="TearDown network for sandbox \"322dd35104afe95111ae1ba3bdfdf0115e8b862a56b999b203771dea0aa62146\" successfully"
Feb 13 18:52:26.020384 containerd[1948]: time="2025-02-13T18:52:26.020249886Z" level=info msg="StopPodSandbox for \"322dd35104afe95111ae1ba3bdfdf0115e8b862a56b999b203771dea0aa62146\" returns successfully"
Feb 13 18:52:26.021644 containerd[1948]: time="2025-02-13T18:52:26.021550890Z" level=info msg="StopPodSandbox for \"bc861172b5d21550081865a595ca2af0c013fca7f8c42f3e45b59a71e76b78aa\""
Feb 13 18:52:26.021935 containerd[1948]: time="2025-02-13T18:52:26.021737238Z" level=info msg="TearDown network for sandbox \"bc861172b5d21550081865a595ca2af0c013fca7f8c42f3e45b59a71e76b78aa\" successfully"
Feb 13 18:52:26.021935 containerd[1948]: time="2025-02-13T18:52:26.021768534Z" level=info msg="StopPodSandbox for \"bc861172b5d21550081865a595ca2af0c013fca7f8c42f3e45b59a71e76b78aa\" returns successfully"
Feb 13 18:52:26.026070 containerd[1948]: time="2025-02-13T18:52:26.025909170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5lvxl,Uid:32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a,Namespace:calico-system,Attempt:4,}"
Feb 13 18:52:26.271735 containerd[1948]: time="2025-02-13T18:52:26.271448023Z" level=error msg="Failed to destroy network for sandbox \"e32d4f5044927b88a216ad26c029878a1d33b5c6bcda0bf79656794608e62df3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:26.273781 containerd[1948]: time="2025-02-13T18:52:26.273677371Z" level=error msg="encountered an error cleaning up failed sandbox \"e32d4f5044927b88a216ad26c029878a1d33b5c6bcda0bf79656794608e62df3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:26.274061 containerd[1948]: time="2025-02-13T18:52:26.273808075Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-4lqsw,Uid:5206b6f6-6cc0-4889-b67e-8705aab95f76,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"e32d4f5044927b88a216ad26c029878a1d33b5c6bcda0bf79656794608e62df3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:26.274199 kubelet[2435]: E0213 18:52:26.274146    2435 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e32d4f5044927b88a216ad26c029878a1d33b5c6bcda0bf79656794608e62df3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:26.274272 kubelet[2435]: E0213 18:52:26.274217    2435 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e32d4f5044927b88a216ad26c029878a1d33b5c6bcda0bf79656794608e62df3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-4lqsw"
Feb 13 18:52:26.274272 kubelet[2435]: E0213 18:52:26.274250    2435 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e32d4f5044927b88a216ad26c029878a1d33b5c6bcda0bf79656794608e62df3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-4lqsw"
Feb 13 18:52:26.274393 kubelet[2435]: E0213 18:52:26.274310    2435 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-4lqsw_default(5206b6f6-6cc0-4889-b67e-8705aab95f76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-4lqsw_default(5206b6f6-6cc0-4889-b67e-8705aab95f76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e32d4f5044927b88a216ad26c029878a1d33b5c6bcda0bf79656794608e62df3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-4lqsw" podUID="5206b6f6-6cc0-4889-b67e-8705aab95f76"
Feb 13 18:52:26.281921 containerd[1948]: time="2025-02-13T18:52:26.281069491Z" level=error msg="Failed to destroy network for sandbox \"bc4b734b3037a73ded3c541abacadf6f462c8c5520b8ef2e92d68b6112523e77\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:26.282296 containerd[1948]: time="2025-02-13T18:52:26.282241171Z" level=error msg="encountered an error cleaning up failed sandbox \"bc4b734b3037a73ded3c541abacadf6f462c8c5520b8ef2e92d68b6112523e77\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:26.282405 containerd[1948]: time="2025-02-13T18:52:26.282347767Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5lvxl,Uid:32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"bc4b734b3037a73ded3c541abacadf6f462c8c5520b8ef2e92d68b6112523e77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:26.283950 kubelet[2435]: E0213 18:52:26.283439    2435 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc4b734b3037a73ded3c541abacadf6f462c8c5520b8ef2e92d68b6112523e77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:26.283950 kubelet[2435]: E0213 18:52:26.283519    2435 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc4b734b3037a73ded3c541abacadf6f462c8c5520b8ef2e92d68b6112523e77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5lvxl"
Feb 13 18:52:26.283950 kubelet[2435]: E0213 18:52:26.283557    2435 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc4b734b3037a73ded3c541abacadf6f462c8c5520b8ef2e92d68b6112523e77\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5lvxl"
Feb 13 18:52:26.284558 kubelet[2435]: E0213 18:52:26.283629    2435 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5lvxl_calico-system(32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5lvxl_calico-system(32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bc4b734b3037a73ded3c541abacadf6f462c8c5520b8ef2e92d68b6112523e77\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5lvxl" podUID="32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a"
Feb 13 18:52:26.775133 kubelet[2435]: E0213 18:52:26.775041    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:26.946570 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bc4b734b3037a73ded3c541abacadf6f462c8c5520b8ef2e92d68b6112523e77-shm.mount: Deactivated successfully.
Feb 13 18:52:26.946768 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e32d4f5044927b88a216ad26c029878a1d33b5c6bcda0bf79656794608e62df3-shm.mount: Deactivated successfully.
Feb 13 18:52:27.015337 kubelet[2435]: I0213 18:52:27.015250    2435 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc4b734b3037a73ded3c541abacadf6f462c8c5520b8ef2e92d68b6112523e77"
Feb 13 18:52:27.022890 containerd[1948]: time="2025-02-13T18:52:27.022720975Z" level=info msg="StopPodSandbox for \"bc4b734b3037a73ded3c541abacadf6f462c8c5520b8ef2e92d68b6112523e77\""
Feb 13 18:52:27.023570 containerd[1948]: time="2025-02-13T18:52:27.023081419Z" level=info msg="Ensure that sandbox bc4b734b3037a73ded3c541abacadf6f462c8c5520b8ef2e92d68b6112523e77 in task-service has been cleanup successfully"
Feb 13 18:52:27.030301 containerd[1948]: time="2025-02-13T18:52:27.028229155Z" level=info msg="TearDown network for sandbox \"bc4b734b3037a73ded3c541abacadf6f462c8c5520b8ef2e92d68b6112523e77\" successfully"
Feb 13 18:52:27.030301 containerd[1948]: time="2025-02-13T18:52:27.028481611Z" level=info msg="StopPodSandbox for \"bc4b734b3037a73ded3c541abacadf6f462c8c5520b8ef2e92d68b6112523e77\" returns successfully"
Feb 13 18:52:27.029489 systemd[1]: run-netns-cni\x2d31de8bb7\x2d5334\x2d14db\x2d59fc\x2dea6b6fb7f8e8.mount: Deactivated successfully.
Feb 13 18:52:27.034462 containerd[1948]: time="2025-02-13T18:52:27.031902067Z" level=info msg="StopPodSandbox for \"8b0a86c76052a5da806a8314fbf5db1cb373122c98e2931eef3e0db116fa8bde\""
Feb 13 18:52:27.034462 containerd[1948]: time="2025-02-13T18:52:27.032053123Z" level=info msg="TearDown network for sandbox \"8b0a86c76052a5da806a8314fbf5db1cb373122c98e2931eef3e0db116fa8bde\" successfully"
Feb 13 18:52:27.034462 containerd[1948]: time="2025-02-13T18:52:27.032080939Z" level=info msg="StopPodSandbox for \"8b0a86c76052a5da806a8314fbf5db1cb373122c98e2931eef3e0db116fa8bde\" returns successfully"
Feb 13 18:52:27.034462 containerd[1948]: time="2025-02-13T18:52:27.032770099Z" level=info msg="StopPodSandbox for \"0cdb8e29a262e3a3eab1adc26297d2301b8e5ad2c49e7bf4e1e39b5e9f12ac3d\""
Feb 13 18:52:27.034462 containerd[1948]: time="2025-02-13T18:52:27.032937499Z" level=info msg="TearDown network for sandbox \"0cdb8e29a262e3a3eab1adc26297d2301b8e5ad2c49e7bf4e1e39b5e9f12ac3d\" successfully"
Feb 13 18:52:27.034462 containerd[1948]: time="2025-02-13T18:52:27.032962123Z" level=info msg="StopPodSandbox for \"0cdb8e29a262e3a3eab1adc26297d2301b8e5ad2c49e7bf4e1e39b5e9f12ac3d\" returns successfully"
Feb 13 18:52:27.034807 kubelet[2435]: I0213 18:52:27.033442    2435 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e32d4f5044927b88a216ad26c029878a1d33b5c6bcda0bf79656794608e62df3"
Feb 13 18:52:27.035164 containerd[1948]: time="2025-02-13T18:52:27.034777519Z" level=info msg="StopPodSandbox for \"e32d4f5044927b88a216ad26c029878a1d33b5c6bcda0bf79656794608e62df3\""
Feb 13 18:52:27.035164 containerd[1948]: time="2025-02-13T18:52:27.035049751Z" level=info msg="Ensure that sandbox e32d4f5044927b88a216ad26c029878a1d33b5c6bcda0bf79656794608e62df3 in task-service has been cleanup successfully"
Feb 13 18:52:27.040805 containerd[1948]: time="2025-02-13T18:52:27.037976419Z" level=info msg="TearDown network for sandbox \"e32d4f5044927b88a216ad26c029878a1d33b5c6bcda0bf79656794608e62df3\" successfully"
Feb 13 18:52:27.040805 containerd[1948]: time="2025-02-13T18:52:27.038031967Z" level=info msg="StopPodSandbox for \"e32d4f5044927b88a216ad26c029878a1d33b5c6bcda0bf79656794608e62df3\" returns successfully"
Feb 13 18:52:27.040805 containerd[1948]: time="2025-02-13T18:52:27.038224459Z" level=info msg="StopPodSandbox for \"322dd35104afe95111ae1ba3bdfdf0115e8b862a56b999b203771dea0aa62146\""
Feb 13 18:52:27.040805 containerd[1948]: time="2025-02-13T18:52:27.038387743Z" level=info msg="TearDown network for sandbox \"322dd35104afe95111ae1ba3bdfdf0115e8b862a56b999b203771dea0aa62146\" successfully"
Feb 13 18:52:27.040805 containerd[1948]: time="2025-02-13T18:52:27.038408875Z" level=info msg="StopPodSandbox for \"322dd35104afe95111ae1ba3bdfdf0115e8b862a56b999b203771dea0aa62146\" returns successfully"
Feb 13 18:52:27.040805 containerd[1948]: time="2025-02-13T18:52:27.040146535Z" level=info msg="StopPodSandbox for \"b754c1aed0174a8de7a8424c7c47055c44429e98fa84e4b4c97aaba818efe109\""
Feb 13 18:52:27.040805 containerd[1948]: time="2025-02-13T18:52:27.040322539Z" level=info msg="TearDown network for sandbox \"b754c1aed0174a8de7a8424c7c47055c44429e98fa84e4b4c97aaba818efe109\" successfully"
Feb 13 18:52:27.040805 containerd[1948]: time="2025-02-13T18:52:27.040344871Z" level=info msg="StopPodSandbox for \"b754c1aed0174a8de7a8424c7c47055c44429e98fa84e4b4c97aaba818efe109\" returns successfully"
Feb 13 18:52:27.040805 containerd[1948]: time="2025-02-13T18:52:27.040453495Z" level=info msg="StopPodSandbox for \"bc861172b5d21550081865a595ca2af0c013fca7f8c42f3e45b59a71e76b78aa\""
Feb 13 18:52:27.040805 containerd[1948]: time="2025-02-13T18:52:27.040573207Z" level=info msg="TearDown network for sandbox \"bc861172b5d21550081865a595ca2af0c013fca7f8c42f3e45b59a71e76b78aa\" successfully"
Feb 13 18:52:27.040805 containerd[1948]: time="2025-02-13T18:52:27.040599499Z" level=info msg="StopPodSandbox for \"bc861172b5d21550081865a595ca2af0c013fca7f8c42f3e45b59a71e76b78aa\" returns successfully"
Feb 13 18:52:27.044104 containerd[1948]: time="2025-02-13T18:52:27.042260323Z" level=info msg="StopPodSandbox for \"8f6280e06118461cd1ecfd80ffda28c3bfa6ef158e268e57d29983e1d9900d0a\""
Feb 13 18:52:27.044104 containerd[1948]: time="2025-02-13T18:52:27.042425359Z" level=info msg="TearDown network for sandbox \"8f6280e06118461cd1ecfd80ffda28c3bfa6ef158e268e57d29983e1d9900d0a\" successfully"
Feb 13 18:52:27.044104 containerd[1948]: time="2025-02-13T18:52:27.042449635Z" level=info msg="StopPodSandbox for \"8f6280e06118461cd1ecfd80ffda28c3bfa6ef158e268e57d29983e1d9900d0a\" returns successfully"
Feb 13 18:52:27.044104 containerd[1948]: time="2025-02-13T18:52:27.042765631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5lvxl,Uid:32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a,Namespace:calico-system,Attempt:5,}"
Feb 13 18:52:27.042586 systemd[1]: run-netns-cni\x2d5b505dcb\x2da6f2\x2dc9d5\x2d110d\x2d08a7260f1d3e.mount: Deactivated successfully.
Feb 13 18:52:27.047892 containerd[1948]: time="2025-02-13T18:52:27.047574991Z" level=info msg="StopPodSandbox for \"32af185d42ece37e498a1de9c848ae9e10bd3f1cf48c65e388d7523c4fd6fcee\""
Feb 13 18:52:27.047892 containerd[1948]: time="2025-02-13T18:52:27.047766403Z" level=info msg="TearDown network for sandbox \"32af185d42ece37e498a1de9c848ae9e10bd3f1cf48c65e388d7523c4fd6fcee\" successfully"
Feb 13 18:52:27.047892 containerd[1948]: time="2025-02-13T18:52:27.047787823Z" level=info msg="StopPodSandbox for \"32af185d42ece37e498a1de9c848ae9e10bd3f1cf48c65e388d7523c4fd6fcee\" returns successfully"
Feb 13 18:52:27.051758 containerd[1948]: time="2025-02-13T18:52:27.051069835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-4lqsw,Uid:5206b6f6-6cc0-4889-b67e-8705aab95f76,Namespace:default,Attempt:4,}"
Feb 13 18:52:27.310288 containerd[1948]: time="2025-02-13T18:52:27.310026296Z" level=error msg="Failed to destroy network for sandbox \"ca45f0ae59894a3cacb04838bec358487d72aa33ddb36a39b959a92b36e4b3e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:27.311885 containerd[1948]: time="2025-02-13T18:52:27.311652704Z" level=error msg="encountered an error cleaning up failed sandbox \"ca45f0ae59894a3cacb04838bec358487d72aa33ddb36a39b959a92b36e4b3e3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:27.311885 containerd[1948]: time="2025-02-13T18:52:27.311754932Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-4lqsw,Uid:5206b6f6-6cc0-4889-b67e-8705aab95f76,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"ca45f0ae59894a3cacb04838bec358487d72aa33ddb36a39b959a92b36e4b3e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:27.312519 kubelet[2435]: E0213 18:52:27.312320    2435 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca45f0ae59894a3cacb04838bec358487d72aa33ddb36a39b959a92b36e4b3e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:27.312519 kubelet[2435]: E0213 18:52:27.312402    2435 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca45f0ae59894a3cacb04838bec358487d72aa33ddb36a39b959a92b36e4b3e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-4lqsw"
Feb 13 18:52:27.313044 kubelet[2435]: E0213 18:52:27.312437    2435 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca45f0ae59894a3cacb04838bec358487d72aa33ddb36a39b959a92b36e4b3e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-4lqsw"
Feb 13 18:52:27.313501 kubelet[2435]: E0213 18:52:27.313266    2435 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-4lqsw_default(5206b6f6-6cc0-4889-b67e-8705aab95f76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-4lqsw_default(5206b6f6-6cc0-4889-b67e-8705aab95f76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca45f0ae59894a3cacb04838bec358487d72aa33ddb36a39b959a92b36e4b3e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-4lqsw" podUID="5206b6f6-6cc0-4889-b67e-8705aab95f76"
Feb 13 18:52:27.324860 containerd[1948]: time="2025-02-13T18:52:27.324180392Z" level=error msg="Failed to destroy network for sandbox \"c8c3c05e91104a49860bfbac2d02a716413c29f881b6cd4f103adf3df9288925\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:27.325756 containerd[1948]: time="2025-02-13T18:52:27.325632380Z" level=error msg="encountered an error cleaning up failed sandbox \"c8c3c05e91104a49860bfbac2d02a716413c29f881b6cd4f103adf3df9288925\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:27.325756 containerd[1948]: time="2025-02-13T18:52:27.325739600Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5lvxl,Uid:32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"c8c3c05e91104a49860bfbac2d02a716413c29f881b6cd4f103adf3df9288925\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:27.326102 kubelet[2435]: E0213 18:52:27.326065    2435 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8c3c05e91104a49860bfbac2d02a716413c29f881b6cd4f103adf3df9288925\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:27.326179 kubelet[2435]: E0213 18:52:27.326136    2435 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8c3c05e91104a49860bfbac2d02a716413c29f881b6cd4f103adf3df9288925\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5lvxl"
Feb 13 18:52:27.326233 kubelet[2435]: E0213 18:52:27.326174    2435 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c8c3c05e91104a49860bfbac2d02a716413c29f881b6cd4f103adf3df9288925\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5lvxl"
Feb 13 18:52:27.326292 kubelet[2435]: E0213 18:52:27.326237    2435 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5lvxl_calico-system(32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5lvxl_calico-system(32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c8c3c05e91104a49860bfbac2d02a716413c29f881b6cd4f103adf3df9288925\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5lvxl" podUID="32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a"
Feb 13 18:52:27.776252 kubelet[2435]: E0213 18:52:27.776038    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:27.943408 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c8c3c05e91104a49860bfbac2d02a716413c29f881b6cd4f103adf3df9288925-shm.mount: Deactivated successfully.
Feb 13 18:52:28.043035 kubelet[2435]: I0213 18:52:28.041947    2435 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c8c3c05e91104a49860bfbac2d02a716413c29f881b6cd4f103adf3df9288925"
Feb 13 18:52:28.044008 containerd[1948]: time="2025-02-13T18:52:28.043605164Z" level=info msg="StopPodSandbox for \"c8c3c05e91104a49860bfbac2d02a716413c29f881b6cd4f103adf3df9288925\""
Feb 13 18:52:28.045260 containerd[1948]: time="2025-02-13T18:52:28.044946716Z" level=info msg="Ensure that sandbox c8c3c05e91104a49860bfbac2d02a716413c29f881b6cd4f103adf3df9288925 in task-service has been cleanup successfully"
Feb 13 18:52:28.049780 containerd[1948]: time="2025-02-13T18:52:28.049155296Z" level=info msg="TearDown network for sandbox \"c8c3c05e91104a49860bfbac2d02a716413c29f881b6cd4f103adf3df9288925\" successfully"
Feb 13 18:52:28.049780 containerd[1948]: time="2025-02-13T18:52:28.049202420Z" level=info msg="StopPodSandbox for \"c8c3c05e91104a49860bfbac2d02a716413c29f881b6cd4f103adf3df9288925\" returns successfully"
Feb 13 18:52:28.051213 systemd[1]: run-netns-cni\x2d9779008c\x2d54b2\x2d92f9\x2de4ca\x2d24b1d2edb240.mount: Deactivated successfully.
Feb 13 18:52:28.052499 containerd[1948]: time="2025-02-13T18:52:28.051231368Z" level=info msg="StopPodSandbox for \"bc4b734b3037a73ded3c541abacadf6f462c8c5520b8ef2e92d68b6112523e77\""
Feb 13 18:52:28.052499 containerd[1948]: time="2025-02-13T18:52:28.051428960Z" level=info msg="TearDown network for sandbox \"bc4b734b3037a73ded3c541abacadf6f462c8c5520b8ef2e92d68b6112523e77\" successfully"
Feb 13 18:52:28.052499 containerd[1948]: time="2025-02-13T18:52:28.051453716Z" level=info msg="StopPodSandbox for \"bc4b734b3037a73ded3c541abacadf6f462c8c5520b8ef2e92d68b6112523e77\" returns successfully"
Feb 13 18:52:28.057567 containerd[1948]: time="2025-02-13T18:52:28.056926616Z" level=info msg="StopPodSandbox for \"8b0a86c76052a5da806a8314fbf5db1cb373122c98e2931eef3e0db116fa8bde\""
Feb 13 18:52:28.057567 containerd[1948]: time="2025-02-13T18:52:28.057123488Z" level=info msg="TearDown network for sandbox \"8b0a86c76052a5da806a8314fbf5db1cb373122c98e2931eef3e0db116fa8bde\" successfully"
Feb 13 18:52:28.057567 containerd[1948]: time="2025-02-13T18:52:28.057145640Z" level=info msg="StopPodSandbox for \"8b0a86c76052a5da806a8314fbf5db1cb373122c98e2931eef3e0db116fa8bde\" returns successfully"
Feb 13 18:52:28.058916 containerd[1948]: time="2025-02-13T18:52:28.057618548Z" level=info msg="StopPodSandbox for \"0cdb8e29a262e3a3eab1adc26297d2301b8e5ad2c49e7bf4e1e39b5e9f12ac3d\""
Feb 13 18:52:28.058916 containerd[1948]: time="2025-02-13T18:52:28.057763820Z" level=info msg="TearDown network for sandbox \"0cdb8e29a262e3a3eab1adc26297d2301b8e5ad2c49e7bf4e1e39b5e9f12ac3d\" successfully"
Feb 13 18:52:28.058916 containerd[1948]: time="2025-02-13T18:52:28.057786020Z" level=info msg="StopPodSandbox for \"0cdb8e29a262e3a3eab1adc26297d2301b8e5ad2c49e7bf4e1e39b5e9f12ac3d\" returns successfully"
Feb 13 18:52:28.060097 containerd[1948]: time="2025-02-13T18:52:28.059758580Z" level=info msg="StopPodSandbox for \"322dd35104afe95111ae1ba3bdfdf0115e8b862a56b999b203771dea0aa62146\""
Feb 13 18:52:28.060097 containerd[1948]: time="2025-02-13T18:52:28.059939912Z" level=info msg="TearDown network for sandbox \"322dd35104afe95111ae1ba3bdfdf0115e8b862a56b999b203771dea0aa62146\" successfully"
Feb 13 18:52:28.060097 containerd[1948]: time="2025-02-13T18:52:28.059962796Z" level=info msg="StopPodSandbox for \"322dd35104afe95111ae1ba3bdfdf0115e8b862a56b999b203771dea0aa62146\" returns successfully"
Feb 13 18:52:28.062205 kubelet[2435]: I0213 18:52:28.061333    2435 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca45f0ae59894a3cacb04838bec358487d72aa33ddb36a39b959a92b36e4b3e3"
Feb 13 18:52:28.062384 containerd[1948]: time="2025-02-13T18:52:28.061519364Z" level=info msg="StopPodSandbox for \"bc861172b5d21550081865a595ca2af0c013fca7f8c42f3e45b59a71e76b78aa\""
Feb 13 18:52:28.062920 containerd[1948]: time="2025-02-13T18:52:28.062644424Z" level=info msg="TearDown network for sandbox \"bc861172b5d21550081865a595ca2af0c013fca7f8c42f3e45b59a71e76b78aa\" successfully"
Feb 13 18:52:28.062920 containerd[1948]: time="2025-02-13T18:52:28.062714840Z" level=info msg="StopPodSandbox for \"bc861172b5d21550081865a595ca2af0c013fca7f8c42f3e45b59a71e76b78aa\" returns successfully"
Feb 13 18:52:28.064797 containerd[1948]: time="2025-02-13T18:52:28.064625384Z" level=info msg="StopPodSandbox for \"ca45f0ae59894a3cacb04838bec358487d72aa33ddb36a39b959a92b36e4b3e3\""
Feb 13 18:52:28.065593 containerd[1948]: time="2025-02-13T18:52:28.064653596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5lvxl,Uid:32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a,Namespace:calico-system,Attempt:6,}"
Feb 13 18:52:28.065593 containerd[1948]: time="2025-02-13T18:52:28.065090024Z" level=info msg="Ensure that sandbox ca45f0ae59894a3cacb04838bec358487d72aa33ddb36a39b959a92b36e4b3e3 in task-service has been cleanup successfully"
Feb 13 18:52:28.068374 containerd[1948]: time="2025-02-13T18:52:28.068312444Z" level=info msg="TearDown network for sandbox \"ca45f0ae59894a3cacb04838bec358487d72aa33ddb36a39b959a92b36e4b3e3\" successfully"
Feb 13 18:52:28.068374 containerd[1948]: time="2025-02-13T18:52:28.068361524Z" level=info msg="StopPodSandbox for \"ca45f0ae59894a3cacb04838bec358487d72aa33ddb36a39b959a92b36e4b3e3\" returns successfully"
Feb 13 18:52:28.071071 containerd[1948]: time="2025-02-13T18:52:28.070105916Z" level=info msg="StopPodSandbox for \"e32d4f5044927b88a216ad26c029878a1d33b5c6bcda0bf79656794608e62df3\""
Feb 13 18:52:28.071071 containerd[1948]: time="2025-02-13T18:52:28.070261892Z" level=info msg="TearDown network for sandbox \"e32d4f5044927b88a216ad26c029878a1d33b5c6bcda0bf79656794608e62df3\" successfully"
Feb 13 18:52:28.071071 containerd[1948]: time="2025-02-13T18:52:28.070283396Z" level=info msg="StopPodSandbox for \"e32d4f5044927b88a216ad26c029878a1d33b5c6bcda0bf79656794608e62df3\" returns successfully"
Feb 13 18:52:28.070188 systemd[1]: run-netns-cni\x2dde64da41\x2dae5a\x2dda75\x2d8220\x2d7ccc23d80be1.mount: Deactivated successfully.
Feb 13 18:52:28.072615 containerd[1948]: time="2025-02-13T18:52:28.072572780Z" level=info msg="StopPodSandbox for \"b754c1aed0174a8de7a8424c7c47055c44429e98fa84e4b4c97aaba818efe109\""
Feb 13 18:52:28.072959 containerd[1948]: time="2025-02-13T18:52:28.072927500Z" level=info msg="TearDown network for sandbox \"b754c1aed0174a8de7a8424c7c47055c44429e98fa84e4b4c97aaba818efe109\" successfully"
Feb 13 18:52:28.073749 containerd[1948]: time="2025-02-13T18:52:28.073604540Z" level=info msg="StopPodSandbox for \"b754c1aed0174a8de7a8424c7c47055c44429e98fa84e4b4c97aaba818efe109\" returns successfully"
Feb 13 18:52:28.075274 containerd[1948]: time="2025-02-13T18:52:28.074954816Z" level=info msg="StopPodSandbox for \"8f6280e06118461cd1ecfd80ffda28c3bfa6ef158e268e57d29983e1d9900d0a\""
Feb 13 18:52:28.075274 containerd[1948]: time="2025-02-13T18:52:28.075124016Z" level=info msg="TearDown network for sandbox \"8f6280e06118461cd1ecfd80ffda28c3bfa6ef158e268e57d29983e1d9900d0a\" successfully"
Feb 13 18:52:28.075274 containerd[1948]: time="2025-02-13T18:52:28.075146408Z" level=info msg="StopPodSandbox for \"8f6280e06118461cd1ecfd80ffda28c3bfa6ef158e268e57d29983e1d9900d0a\" returns successfully"
Feb 13 18:52:28.077176 containerd[1948]: time="2025-02-13T18:52:28.077130536Z" level=info msg="StopPodSandbox for \"32af185d42ece37e498a1de9c848ae9e10bd3f1cf48c65e388d7523c4fd6fcee\""
Feb 13 18:52:28.077516 containerd[1948]: time="2025-02-13T18:52:28.077470484Z" level=info msg="TearDown network for sandbox \"32af185d42ece37e498a1de9c848ae9e10bd3f1cf48c65e388d7523c4fd6fcee\" successfully"
Feb 13 18:52:28.078495 containerd[1948]: time="2025-02-13T18:52:28.078456980Z" level=info msg="StopPodSandbox for \"32af185d42ece37e498a1de9c848ae9e10bd3f1cf48c65e388d7523c4fd6fcee\" returns successfully"
Feb 13 18:52:28.079922 containerd[1948]: time="2025-02-13T18:52:28.079450844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-4lqsw,Uid:5206b6f6-6cc0-4889-b67e-8705aab95f76,Namespace:default,Attempt:5,}"
Feb 13 18:52:28.289438 containerd[1948]: time="2025-02-13T18:52:28.289337901Z" level=error msg="Failed to destroy network for sandbox \"488c9dbd53977a754e7e0f26b89f6c31ae00f5e9243e49de747468528142d019\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:28.290063 containerd[1948]: time="2025-02-13T18:52:28.290003037Z" level=error msg="encountered an error cleaning up failed sandbox \"488c9dbd53977a754e7e0f26b89f6c31ae00f5e9243e49de747468528142d019\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:28.290162 containerd[1948]: time="2025-02-13T18:52:28.290114661Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5lvxl,Uid:32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"488c9dbd53977a754e7e0f26b89f6c31ae00f5e9243e49de747468528142d019\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:28.290704 kubelet[2435]: E0213 18:52:28.290395    2435 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"488c9dbd53977a754e7e0f26b89f6c31ae00f5e9243e49de747468528142d019\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:28.290704 kubelet[2435]: E0213 18:52:28.290482    2435 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"488c9dbd53977a754e7e0f26b89f6c31ae00f5e9243e49de747468528142d019\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5lvxl"
Feb 13 18:52:28.290704 kubelet[2435]: E0213 18:52:28.290516    2435 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"488c9dbd53977a754e7e0f26b89f6c31ae00f5e9243e49de747468528142d019\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5lvxl"
Feb 13 18:52:28.291199 kubelet[2435]: E0213 18:52:28.290575    2435 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5lvxl_calico-system(32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5lvxl_calico-system(32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"488c9dbd53977a754e7e0f26b89f6c31ae00f5e9243e49de747468528142d019\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5lvxl" podUID="32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a"
Feb 13 18:52:28.327655 containerd[1948]: time="2025-02-13T18:52:28.326047197Z" level=error msg="Failed to destroy network for sandbox \"631aab14d63b1f1150c1a50783c9edf3ae62cb7ec620c50ccdd7f15465a490dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:28.327655 containerd[1948]: time="2025-02-13T18:52:28.326573709Z" level=error msg="encountered an error cleaning up failed sandbox \"631aab14d63b1f1150c1a50783c9edf3ae62cb7ec620c50ccdd7f15465a490dd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:28.327655 containerd[1948]: time="2025-02-13T18:52:28.326659017Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-4lqsw,Uid:5206b6f6-6cc0-4889-b67e-8705aab95f76,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"631aab14d63b1f1150c1a50783c9edf3ae62cb7ec620c50ccdd7f15465a490dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:28.328216 kubelet[2435]: E0213 18:52:28.326964    2435 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"631aab14d63b1f1150c1a50783c9edf3ae62cb7ec620c50ccdd7f15465a490dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:28.328216 kubelet[2435]: E0213 18:52:28.327034    2435 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"631aab14d63b1f1150c1a50783c9edf3ae62cb7ec620c50ccdd7f15465a490dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-4lqsw"
Feb 13 18:52:28.328216 kubelet[2435]: E0213 18:52:28.327066    2435 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"631aab14d63b1f1150c1a50783c9edf3ae62cb7ec620c50ccdd7f15465a490dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-4lqsw"
Feb 13 18:52:28.328413 kubelet[2435]: E0213 18:52:28.327132    2435 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-4lqsw_default(5206b6f6-6cc0-4889-b67e-8705aab95f76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-4lqsw_default(5206b6f6-6cc0-4889-b67e-8705aab95f76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"631aab14d63b1f1150c1a50783c9edf3ae62cb7ec620c50ccdd7f15465a490dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-4lqsw" podUID="5206b6f6-6cc0-4889-b67e-8705aab95f76"
Feb 13 18:52:28.727397 kubelet[2435]: E0213 18:52:28.727114    2435 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:28.776642 kubelet[2435]: E0213 18:52:28.776579    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:28.944404 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-488c9dbd53977a754e7e0f26b89f6c31ae00f5e9243e49de747468528142d019-shm.mount: Deactivated successfully.
Feb 13 18:52:28.994647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3170782081.mount: Deactivated successfully.
Feb 13 18:52:29.070723 kubelet[2435]: I0213 18:52:29.070676    2435 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="488c9dbd53977a754e7e0f26b89f6c31ae00f5e9243e49de747468528142d019"
Feb 13 18:52:29.073527 containerd[1948]: time="2025-02-13T18:52:29.073016169Z" level=info msg="StopPodSandbox for \"488c9dbd53977a754e7e0f26b89f6c31ae00f5e9243e49de747468528142d019\""
Feb 13 18:52:29.073527 containerd[1948]: time="2025-02-13T18:52:29.073285341Z" level=info msg="Ensure that sandbox 488c9dbd53977a754e7e0f26b89f6c31ae00f5e9243e49de747468528142d019 in task-service has been cleanup successfully"
Feb 13 18:52:29.076045 systemd[1]: run-netns-cni\x2dab19912f\x2da294\x2d35c1\x2d80a7\x2d5dd34b27adaa.mount: Deactivated successfully.
Feb 13 18:52:29.076871 containerd[1948]: time="2025-02-13T18:52:29.076207845Z" level=info msg="TearDown network for sandbox \"488c9dbd53977a754e7e0f26b89f6c31ae00f5e9243e49de747468528142d019\" successfully"
Feb 13 18:52:29.076871 containerd[1948]: time="2025-02-13T18:52:29.076253205Z" level=info msg="StopPodSandbox for \"488c9dbd53977a754e7e0f26b89f6c31ae00f5e9243e49de747468528142d019\" returns successfully"
Feb 13 18:52:29.079113 containerd[1948]: time="2025-02-13T18:52:29.078415353Z" level=info msg="StopPodSandbox for \"c8c3c05e91104a49860bfbac2d02a716413c29f881b6cd4f103adf3df9288925\""
Feb 13 18:52:29.079113 containerd[1948]: time="2025-02-13T18:52:29.078608577Z" level=info msg="TearDown network for sandbox \"c8c3c05e91104a49860bfbac2d02a716413c29f881b6cd4f103adf3df9288925\" successfully"
Feb 13 18:52:29.079113 containerd[1948]: time="2025-02-13T18:52:29.078633609Z" level=info msg="StopPodSandbox for \"c8c3c05e91104a49860bfbac2d02a716413c29f881b6cd4f103adf3df9288925\" returns successfully"
Feb 13 18:52:29.079651 containerd[1948]: time="2025-02-13T18:52:29.079605441Z" level=info msg="StopPodSandbox for \"bc4b734b3037a73ded3c541abacadf6f462c8c5520b8ef2e92d68b6112523e77\""
Feb 13 18:52:29.080229 containerd[1948]: time="2025-02-13T18:52:29.080192253Z" level=info msg="TearDown network for sandbox \"bc4b734b3037a73ded3c541abacadf6f462c8c5520b8ef2e92d68b6112523e77\" successfully"
Feb 13 18:52:29.080383 containerd[1948]: time="2025-02-13T18:52:29.080356413Z" level=info msg="StopPodSandbox for \"bc4b734b3037a73ded3c541abacadf6f462c8c5520b8ef2e92d68b6112523e77\" returns successfully"
Feb 13 18:52:29.082454 containerd[1948]: time="2025-02-13T18:52:29.081881661Z" level=info msg="StopPodSandbox for \"8b0a86c76052a5da806a8314fbf5db1cb373122c98e2931eef3e0db116fa8bde\""
Feb 13 18:52:29.082454 containerd[1948]: time="2025-02-13T18:52:29.082025325Z" level=info msg="TearDown network for sandbox \"8b0a86c76052a5da806a8314fbf5db1cb373122c98e2931eef3e0db116fa8bde\" successfully"
Feb 13 18:52:29.082454 containerd[1948]: time="2025-02-13T18:52:29.082050573Z" level=info msg="StopPodSandbox for \"8b0a86c76052a5da806a8314fbf5db1cb373122c98e2931eef3e0db116fa8bde\" returns successfully"
Feb 13 18:52:29.083564 containerd[1948]: time="2025-02-13T18:52:29.083504505Z" level=info msg="StopPodSandbox for \"0cdb8e29a262e3a3eab1adc26297d2301b8e5ad2c49e7bf4e1e39b5e9f12ac3d\""
Feb 13 18:52:29.084116 containerd[1948]: time="2025-02-13T18:52:29.083726913Z" level=info msg="TearDown network for sandbox \"0cdb8e29a262e3a3eab1adc26297d2301b8e5ad2c49e7bf4e1e39b5e9f12ac3d\" successfully"
Feb 13 18:52:29.084204 containerd[1948]: time="2025-02-13T18:52:29.084110181Z" level=info msg="StopPodSandbox for \"0cdb8e29a262e3a3eab1adc26297d2301b8e5ad2c49e7bf4e1e39b5e9f12ac3d\" returns successfully"
Feb 13 18:52:29.084931 containerd[1948]: time="2025-02-13T18:52:29.084712401Z" level=info msg="StopPodSandbox for \"322dd35104afe95111ae1ba3bdfdf0115e8b862a56b999b203771dea0aa62146\""
Feb 13 18:52:29.084931 containerd[1948]: time="2025-02-13T18:52:29.084898197Z" level=info msg="TearDown network for sandbox \"322dd35104afe95111ae1ba3bdfdf0115e8b862a56b999b203771dea0aa62146\" successfully"
Feb 13 18:52:29.085276 containerd[1948]: time="2025-02-13T18:52:29.084934125Z" level=info msg="StopPodSandbox for \"322dd35104afe95111ae1ba3bdfdf0115e8b862a56b999b203771dea0aa62146\" returns successfully"
Feb 13 18:52:29.085912 containerd[1948]: time="2025-02-13T18:52:29.085594737Z" level=info msg="StopPodSandbox for \"bc861172b5d21550081865a595ca2af0c013fca7f8c42f3e45b59a71e76b78aa\""
Feb 13 18:52:29.085912 containerd[1948]: time="2025-02-13T18:52:29.085758633Z" level=info msg="TearDown network for sandbox \"bc861172b5d21550081865a595ca2af0c013fca7f8c42f3e45b59a71e76b78aa\" successfully"
Feb 13 18:52:29.085912 containerd[1948]: time="2025-02-13T18:52:29.085781829Z" level=info msg="StopPodSandbox for \"bc861172b5d21550081865a595ca2af0c013fca7f8c42f3e45b59a71e76b78aa\" returns successfully"
Feb 13 18:52:29.086422 kubelet[2435]: I0213 18:52:29.086385    2435 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="631aab14d63b1f1150c1a50783c9edf3ae62cb7ec620c50ccdd7f15465a490dd"
Feb 13 18:52:29.087876 containerd[1948]: time="2025-02-13T18:52:29.087502017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5lvxl,Uid:32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a,Namespace:calico-system,Attempt:7,}"
Feb 13 18:52:29.088320 containerd[1948]: time="2025-02-13T18:52:29.088273449Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:52:29.089038 containerd[1948]: time="2025-02-13T18:52:29.088977141Z" level=info msg="StopPodSandbox for \"631aab14d63b1f1150c1a50783c9edf3ae62cb7ec620c50ccdd7f15465a490dd\""
Feb 13 18:52:29.089293 containerd[1948]: time="2025-02-13T18:52:29.089248113Z" level=info msg="Ensure that sandbox 631aab14d63b1f1150c1a50783c9edf3ae62cb7ec620c50ccdd7f15465a490dd in task-service has been cleanup successfully"
Feb 13 18:52:29.091599 containerd[1948]: time="2025-02-13T18:52:29.089588973Z" level=info msg="TearDown network for sandbox \"631aab14d63b1f1150c1a50783c9edf3ae62cb7ec620c50ccdd7f15465a490dd\" successfully"
Feb 13 18:52:29.091599 containerd[1948]: time="2025-02-13T18:52:29.089626581Z" level=info msg="StopPodSandbox for \"631aab14d63b1f1150c1a50783c9edf3ae62cb7ec620c50ccdd7f15465a490dd\" returns successfully"
Feb 13 18:52:29.091761 containerd[1948]: time="2025-02-13T18:52:29.091590033Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762"
Feb 13 18:52:29.092788 containerd[1948]: time="2025-02-13T18:52:29.092699385Z" level=info msg="StopPodSandbox for \"ca45f0ae59894a3cacb04838bec358487d72aa33ddb36a39b959a92b36e4b3e3\""
Feb 13 18:52:29.093810 containerd[1948]: time="2025-02-13T18:52:29.093412101Z" level=info msg="TearDown network for sandbox \"ca45f0ae59894a3cacb04838bec358487d72aa33ddb36a39b959a92b36e4b3e3\" successfully"
Feb 13 18:52:29.093810 containerd[1948]: time="2025-02-13T18:52:29.093458925Z" level=info msg="StopPodSandbox for \"ca45f0ae59894a3cacb04838bec358487d72aa33ddb36a39b959a92b36e4b3e3\" returns successfully"
Feb 13 18:52:29.094558 systemd[1]: run-netns-cni\x2dd5d18b23\x2db089\x2db5c9\x2db8df\x2d3dbd12307fa7.mount: Deactivated successfully.
Feb 13 18:52:29.097504 containerd[1948]: time="2025-02-13T18:52:29.097058589Z" level=info msg="StopPodSandbox for \"e32d4f5044927b88a216ad26c029878a1d33b5c6bcda0bf79656794608e62df3\""
Feb 13 18:52:29.097504 containerd[1948]: time="2025-02-13T18:52:29.097229217Z" level=info msg="TearDown network for sandbox \"e32d4f5044927b88a216ad26c029878a1d33b5c6bcda0bf79656794608e62df3\" successfully"
Feb 13 18:52:29.097504 containerd[1948]: time="2025-02-13T18:52:29.097258773Z" level=info msg="StopPodSandbox for \"e32d4f5044927b88a216ad26c029878a1d33b5c6bcda0bf79656794608e62df3\" returns successfully"
Feb 13 18:52:29.098866 containerd[1948]: time="2025-02-13T18:52:29.097962513Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:52:29.098866 containerd[1948]: time="2025-02-13T18:52:29.098482689Z" level=info msg="StopPodSandbox for \"b754c1aed0174a8de7a8424c7c47055c44429e98fa84e4b4c97aaba818efe109\""
Feb 13 18:52:29.099013 containerd[1948]: time="2025-02-13T18:52:29.098953305Z" level=info msg="TearDown network for sandbox \"b754c1aed0174a8de7a8424c7c47055c44429e98fa84e4b4c97aaba818efe109\" successfully"
Feb 13 18:52:29.099092 containerd[1948]: time="2025-02-13T18:52:29.099005121Z" level=info msg="StopPodSandbox for \"b754c1aed0174a8de7a8424c7c47055c44429e98fa84e4b4c97aaba818efe109\" returns successfully"
Feb 13 18:52:29.100228 containerd[1948]: time="2025-02-13T18:52:29.100161045Z" level=info msg="StopPodSandbox for \"8f6280e06118461cd1ecfd80ffda28c3bfa6ef158e268e57d29983e1d9900d0a\""
Feb 13 18:52:29.100468 containerd[1948]: time="2025-02-13T18:52:29.100326813Z" level=info msg="TearDown network for sandbox \"8f6280e06118461cd1ecfd80ffda28c3bfa6ef158e268e57d29983e1d9900d0a\" successfully"
Feb 13 18:52:29.100468 containerd[1948]: time="2025-02-13T18:52:29.100349061Z" level=info msg="StopPodSandbox for \"8f6280e06118461cd1ecfd80ffda28c3bfa6ef158e268e57d29983e1d9900d0a\" returns successfully"
Feb 13 18:52:29.101110 containerd[1948]: time="2025-02-13T18:52:29.101054109Z" level=info msg="StopPodSandbox for \"32af185d42ece37e498a1de9c848ae9e10bd3f1cf48c65e388d7523c4fd6fcee\""
Feb 13 18:52:29.101232 containerd[1948]: time="2025-02-13T18:52:29.101214393Z" level=info msg="TearDown network for sandbox \"32af185d42ece37e498a1de9c848ae9e10bd3f1cf48c65e388d7523c4fd6fcee\" successfully"
Feb 13 18:52:29.101285 containerd[1948]: time="2025-02-13T18:52:29.101237037Z" level=info msg="StopPodSandbox for \"32af185d42ece37e498a1de9c848ae9e10bd3f1cf48c65e388d7523c4fd6fcee\" returns successfully"
Feb 13 18:52:29.102208 containerd[1948]: time="2025-02-13T18:52:29.102109569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-4lqsw,Uid:5206b6f6-6cc0-4889-b67e-8705aab95f76,Namespace:default,Attempt:6,}"
Feb 13 18:52:29.109781 containerd[1948]: time="2025-02-13T18:52:29.108615141Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:52:29.109781 containerd[1948]: time="2025-02-13T18:52:29.109575921Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 7.171921044s"
Feb 13 18:52:29.109781 containerd[1948]: time="2025-02-13T18:52:29.109623981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\""
Feb 13 18:52:29.138258 containerd[1948]: time="2025-02-13T18:52:29.138178305Z" level=info msg="CreateContainer within sandbox \"3f438552059978b9bd57174854802172858bf7f9f2361e1d8733b083b7906cf2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Feb 13 18:52:29.183504 containerd[1948]: time="2025-02-13T18:52:29.183387813Z" level=info msg="CreateContainer within sandbox \"3f438552059978b9bd57174854802172858bf7f9f2361e1d8733b083b7906cf2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"cc32a18f3db131429d5bacdaae6e0b43ac704e8271c5a200441529edb83373ae\""
Feb 13 18:52:29.184636 containerd[1948]: time="2025-02-13T18:52:29.184478721Z" level=info msg="StartContainer for \"cc32a18f3db131429d5bacdaae6e0b43ac704e8271c5a200441529edb83373ae\""
Feb 13 18:52:29.272284 systemd[1]: Started cri-containerd-cc32a18f3db131429d5bacdaae6e0b43ac704e8271c5a200441529edb83373ae.scope - libcontainer container cc32a18f3db131429d5bacdaae6e0b43ac704e8271c5a200441529edb83373ae.
Feb 13 18:52:29.338032 containerd[1948]: time="2025-02-13T18:52:29.337791862Z" level=error msg="Failed to destroy network for sandbox \"87f4f102dfc54d448278746ef84ca4404ac78972dbbdcc27dba94db8cc19970c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:29.338684 containerd[1948]: time="2025-02-13T18:52:29.338565766Z" level=error msg="encountered an error cleaning up failed sandbox \"87f4f102dfc54d448278746ef84ca4404ac78972dbbdcc27dba94db8cc19970c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:29.339543 containerd[1948]: time="2025-02-13T18:52:29.339208486Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5lvxl,Uid:32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"87f4f102dfc54d448278746ef84ca4404ac78972dbbdcc27dba94db8cc19970c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:29.340530 kubelet[2435]: E0213 18:52:29.340092    2435 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87f4f102dfc54d448278746ef84ca4404ac78972dbbdcc27dba94db8cc19970c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:29.340530 kubelet[2435]: E0213 18:52:29.340194    2435 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87f4f102dfc54d448278746ef84ca4404ac78972dbbdcc27dba94db8cc19970c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5lvxl"
Feb 13 18:52:29.340530 kubelet[2435]: E0213 18:52:29.340261    2435 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"87f4f102dfc54d448278746ef84ca4404ac78972dbbdcc27dba94db8cc19970c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5lvxl"
Feb 13 18:52:29.340753 kubelet[2435]: E0213 18:52:29.340356    2435 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5lvxl_calico-system(32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5lvxl_calico-system(32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"87f4f102dfc54d448278746ef84ca4404ac78972dbbdcc27dba94db8cc19970c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5lvxl" podUID="32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a"
Feb 13 18:52:29.348788 containerd[1948]: time="2025-02-13T18:52:29.348609850Z" level=error msg="Failed to destroy network for sandbox \"149daf4cbeb57fe2ab21c96dbd329b65eeb250a1971264e49db3f28afb765fbb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:29.350349 containerd[1948]: time="2025-02-13T18:52:29.350080510Z" level=error msg="encountered an error cleaning up failed sandbox \"149daf4cbeb57fe2ab21c96dbd329b65eeb250a1971264e49db3f28afb765fbb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:29.351072 containerd[1948]: time="2025-02-13T18:52:29.350720854Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-4lqsw,Uid:5206b6f6-6cc0-4889-b67e-8705aab95f76,Namespace:default,Attempt:6,} failed, error" error="failed to setup network for sandbox \"149daf4cbeb57fe2ab21c96dbd329b65eeb250a1971264e49db3f28afb765fbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:29.352359 kubelet[2435]: E0213 18:52:29.352181    2435 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"149daf4cbeb57fe2ab21c96dbd329b65eeb250a1971264e49db3f28afb765fbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 18:52:29.352359 kubelet[2435]: E0213 18:52:29.352263    2435 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"149daf4cbeb57fe2ab21c96dbd329b65eeb250a1971264e49db3f28afb765fbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-4lqsw"
Feb 13 18:52:29.352359 kubelet[2435]: E0213 18:52:29.352298    2435 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"149daf4cbeb57fe2ab21c96dbd329b65eeb250a1971264e49db3f28afb765fbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-4lqsw"
Feb 13 18:52:29.353273 kubelet[2435]: E0213 18:52:29.352453    2435 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-4lqsw_default(5206b6f6-6cc0-4889-b67e-8705aab95f76)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-4lqsw_default(5206b6f6-6cc0-4889-b67e-8705aab95f76)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"149daf4cbeb57fe2ab21c96dbd329b65eeb250a1971264e49db3f28afb765fbb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-4lqsw" podUID="5206b6f6-6cc0-4889-b67e-8705aab95f76"
Feb 13 18:52:29.373341 containerd[1948]: time="2025-02-13T18:52:29.373148446Z" level=info msg="StartContainer for \"cc32a18f3db131429d5bacdaae6e0b43ac704e8271c5a200441529edb83373ae\" returns successfully"
Feb 13 18:52:29.493171 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Feb 13 18:52:29.493335 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Feb 13 18:52:29.777492 kubelet[2435]: E0213 18:52:29.777419    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:30.106254 kubelet[2435]: I0213 18:52:30.106120    2435 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87f4f102dfc54d448278746ef84ca4404ac78972dbbdcc27dba94db8cc19970c"
Feb 13 18:52:30.109510 containerd[1948]: time="2025-02-13T18:52:30.108378874Z" level=info msg="StopPodSandbox for \"87f4f102dfc54d448278746ef84ca4404ac78972dbbdcc27dba94db8cc19970c\""
Feb 13 18:52:30.109510 containerd[1948]: time="2025-02-13T18:52:30.108653830Z" level=info msg="Ensure that sandbox 87f4f102dfc54d448278746ef84ca4404ac78972dbbdcc27dba94db8cc19970c in task-service has been cleanup successfully"
Feb 13 18:52:30.111624 containerd[1948]: time="2025-02-13T18:52:30.111570538Z" level=info msg="TearDown network for sandbox \"87f4f102dfc54d448278746ef84ca4404ac78972dbbdcc27dba94db8cc19970c\" successfully"
Feb 13 18:52:30.111624 containerd[1948]: time="2025-02-13T18:52:30.111616486Z" level=info msg="StopPodSandbox for \"87f4f102dfc54d448278746ef84ca4404ac78972dbbdcc27dba94db8cc19970c\" returns successfully"
Feb 13 18:52:30.113860 containerd[1948]: time="2025-02-13T18:52:30.113099290Z" level=info msg="StopPodSandbox for \"488c9dbd53977a754e7e0f26b89f6c31ae00f5e9243e49de747468528142d019\""
Feb 13 18:52:30.113860 containerd[1948]: time="2025-02-13T18:52:30.113263210Z" level=info msg="TearDown network for sandbox \"488c9dbd53977a754e7e0f26b89f6c31ae00f5e9243e49de747468528142d019\" successfully"
Feb 13 18:52:30.113860 containerd[1948]: time="2025-02-13T18:52:30.113285614Z" level=info msg="StopPodSandbox for \"488c9dbd53977a754e7e0f26b89f6c31ae00f5e9243e49de747468528142d019\" returns successfully"
Feb 13 18:52:30.114419 containerd[1948]: time="2025-02-13T18:52:30.114380326Z" level=info msg="StopPodSandbox for \"c8c3c05e91104a49860bfbac2d02a716413c29f881b6cd4f103adf3df9288925\""
Feb 13 18:52:30.114758 containerd[1948]: time="2025-02-13T18:52:30.114727618Z" level=info msg="TearDown network for sandbox \"c8c3c05e91104a49860bfbac2d02a716413c29f881b6cd4f103adf3df9288925\" successfully"
Feb 13 18:52:30.115128 containerd[1948]: time="2025-02-13T18:52:30.115096426Z" level=info msg="StopPodSandbox for \"c8c3c05e91104a49860bfbac2d02a716413c29f881b6cd4f103adf3df9288925\" returns successfully"
Feb 13 18:52:30.117595 systemd[1]: run-netns-cni\x2dda0b29fa\x2d6a3e\x2dc7dc\x2d70aa\x2d8cef48dffa07.mount: Deactivated successfully.
Feb 13 18:52:30.119918 containerd[1948]: time="2025-02-13T18:52:30.119793994Z" level=info msg="StopPodSandbox for \"bc4b734b3037a73ded3c541abacadf6f462c8c5520b8ef2e92d68b6112523e77\""
Feb 13 18:52:30.120642 containerd[1948]: time="2025-02-13T18:52:30.120581734Z" level=info msg="TearDown network for sandbox \"bc4b734b3037a73ded3c541abacadf6f462c8c5520b8ef2e92d68b6112523e77\" successfully"
Feb 13 18:52:30.121001 containerd[1948]: time="2025-02-13T18:52:30.120861562Z" level=info msg="StopPodSandbox for \"bc4b734b3037a73ded3c541abacadf6f462c8c5520b8ef2e92d68b6112523e77\" returns successfully"
Feb 13 18:52:30.121349 kubelet[2435]: I0213 18:52:30.121269    2435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zn5w7" podStartSLOduration=4.102773862 podStartE2EDuration="21.121247362s" podCreationTimestamp="2025-02-13 18:52:09 +0000 UTC" firstStartedPulling="2025-02-13 18:52:12.094006397 +0000 UTC m=+4.833555901" lastFinishedPulling="2025-02-13 18:52:29.112479897 +0000 UTC m=+21.852029401" observedRunningTime="2025-02-13 18:52:30.116786194 +0000 UTC m=+22.856335722" watchObservedRunningTime="2025-02-13 18:52:30.121247362 +0000 UTC m=+22.860796866"
Feb 13 18:52:30.122929 containerd[1948]: time="2025-02-13T18:52:30.122738830Z" level=info msg="StopPodSandbox for \"8b0a86c76052a5da806a8314fbf5db1cb373122c98e2931eef3e0db116fa8bde\""
Feb 13 18:52:30.123635 containerd[1948]: time="2025-02-13T18:52:30.123578470Z" level=info msg="TearDown network for sandbox \"8b0a86c76052a5da806a8314fbf5db1cb373122c98e2931eef3e0db116fa8bde\" successfully"
Feb 13 18:52:30.124885 kubelet[2435]: I0213 18:52:30.123823    2435 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="149daf4cbeb57fe2ab21c96dbd329b65eeb250a1971264e49db3f28afb765fbb"
Feb 13 18:52:30.125082 containerd[1948]: time="2025-02-13T18:52:30.125031586Z" level=info msg="StopPodSandbox for \"8b0a86c76052a5da806a8314fbf5db1cb373122c98e2931eef3e0db116fa8bde\" returns successfully"
Feb 13 18:52:30.125280 containerd[1948]: time="2025-02-13T18:52:30.125002582Z" level=info msg="StopPodSandbox for \"149daf4cbeb57fe2ab21c96dbd329b65eeb250a1971264e49db3f28afb765fbb\""
Feb 13 18:52:30.125971 containerd[1948]: time="2025-02-13T18:52:30.125903602Z" level=info msg="Ensure that sandbox 149daf4cbeb57fe2ab21c96dbd329b65eeb250a1971264e49db3f28afb765fbb in task-service has been cleanup successfully"
Feb 13 18:52:30.127154 containerd[1948]: time="2025-02-13T18:52:30.126985402Z" level=info msg="StopPodSandbox for \"0cdb8e29a262e3a3eab1adc26297d2301b8e5ad2c49e7bf4e1e39b5e9f12ac3d\""
Feb 13 18:52:30.127559 containerd[1948]: time="2025-02-13T18:52:30.127527922Z" level=info msg="TearDown network for sandbox \"0cdb8e29a262e3a3eab1adc26297d2301b8e5ad2c49e7bf4e1e39b5e9f12ac3d\" successfully"
Feb 13 18:52:30.127745 containerd[1948]: time="2025-02-13T18:52:30.127611490Z" level=info msg="StopPodSandbox for \"0cdb8e29a262e3a3eab1adc26297d2301b8e5ad2c49e7bf4e1e39b5e9f12ac3d\" returns successfully"
Feb 13 18:52:30.128540 containerd[1948]: time="2025-02-13T18:52:30.128498458Z" level=info msg="StopPodSandbox for \"322dd35104afe95111ae1ba3bdfdf0115e8b862a56b999b203771dea0aa62146\""
Feb 13 18:52:30.129228 containerd[1948]: time="2025-02-13T18:52:30.129148762Z" level=info msg="TearDown network for sandbox \"322dd35104afe95111ae1ba3bdfdf0115e8b862a56b999b203771dea0aa62146\" successfully"
Feb 13 18:52:30.129228 containerd[1948]: time="2025-02-13T18:52:30.129182758Z" level=info msg="StopPodSandbox for \"322dd35104afe95111ae1ba3bdfdf0115e8b862a56b999b203771dea0aa62146\" returns successfully"
Feb 13 18:52:30.130204 containerd[1948]: time="2025-02-13T18:52:30.130118458Z" level=info msg="StopPodSandbox for \"bc861172b5d21550081865a595ca2af0c013fca7f8c42f3e45b59a71e76b78aa\""
Feb 13 18:52:30.131009 containerd[1948]: time="2025-02-13T18:52:30.130934746Z" level=info msg="TearDown network for sandbox \"149daf4cbeb57fe2ab21c96dbd329b65eeb250a1971264e49db3f28afb765fbb\" successfully"
Feb 13 18:52:30.131009 containerd[1948]: time="2025-02-13T18:52:30.130984030Z" level=info msg="StopPodSandbox for \"149daf4cbeb57fe2ab21c96dbd329b65eeb250a1971264e49db3f28afb765fbb\" returns successfully"
Feb 13 18:52:30.131009 containerd[1948]: time="2025-02-13T18:52:30.130993342Z" level=info msg="TearDown network for sandbox \"bc861172b5d21550081865a595ca2af0c013fca7f8c42f3e45b59a71e76b78aa\" successfully"
Feb 13 18:52:30.131009 containerd[1948]: time="2025-02-13T18:52:30.131070850Z" level=info msg="StopPodSandbox for \"bc861172b5d21550081865a595ca2af0c013fca7f8c42f3e45b59a71e76b78aa\" returns successfully"
Feb 13 18:52:30.132613 containerd[1948]: time="2025-02-13T18:52:30.132406510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5lvxl,Uid:32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a,Namespace:calico-system,Attempt:8,}"
Feb 13 18:52:30.133008 containerd[1948]: time="2025-02-13T18:52:30.132972430Z" level=info msg="StopPodSandbox for \"631aab14d63b1f1150c1a50783c9edf3ae62cb7ec620c50ccdd7f15465a490dd\""
Feb 13 18:52:30.134216 containerd[1948]: time="2025-02-13T18:52:30.134164234Z" level=info msg="TearDown network for sandbox \"631aab14d63b1f1150c1a50783c9edf3ae62cb7ec620c50ccdd7f15465a490dd\" successfully"
Feb 13 18:52:30.134216 containerd[1948]: time="2025-02-13T18:52:30.134207698Z" level=info msg="StopPodSandbox for \"631aab14d63b1f1150c1a50783c9edf3ae62cb7ec620c50ccdd7f15465a490dd\" returns successfully"
Feb 13 18:52:30.135292 systemd[1]: run-netns-cni\x2d9d4154dc\x2d451f\x2d3746\x2d27a8\x2d213ef8f532e9.mount: Deactivated successfully.
Feb 13 18:52:30.137980 containerd[1948]: time="2025-02-13T18:52:30.137723314Z" level=info msg="StopPodSandbox for \"ca45f0ae59894a3cacb04838bec358487d72aa33ddb36a39b959a92b36e4b3e3\""
Feb 13 18:52:30.139007 containerd[1948]: time="2025-02-13T18:52:30.138791062Z" level=info msg="TearDown network for sandbox \"ca45f0ae59894a3cacb04838bec358487d72aa33ddb36a39b959a92b36e4b3e3\" successfully"
Feb 13 18:52:30.139007 containerd[1948]: time="2025-02-13T18:52:30.138952150Z" level=info msg="StopPodSandbox for \"ca45f0ae59894a3cacb04838bec358487d72aa33ddb36a39b959a92b36e4b3e3\" returns successfully"
Feb 13 18:52:30.141120 containerd[1948]: time="2025-02-13T18:52:30.141029230Z" level=info msg="StopPodSandbox for \"e32d4f5044927b88a216ad26c029878a1d33b5c6bcda0bf79656794608e62df3\""
Feb 13 18:52:30.141335 containerd[1948]: time="2025-02-13T18:52:30.141214306Z" level=info msg="TearDown network for sandbox \"e32d4f5044927b88a216ad26c029878a1d33b5c6bcda0bf79656794608e62df3\" successfully"
Feb 13 18:52:30.141335 containerd[1948]: time="2025-02-13T18:52:30.141237346Z" level=info msg="StopPodSandbox for \"e32d4f5044927b88a216ad26c029878a1d33b5c6bcda0bf79656794608e62df3\" returns successfully"
Feb 13 18:52:30.143564 containerd[1948]: time="2025-02-13T18:52:30.143204194Z" level=info msg="StopPodSandbox for \"b754c1aed0174a8de7a8424c7c47055c44429e98fa84e4b4c97aaba818efe109\""
Feb 13 18:52:30.143564 containerd[1948]: time="2025-02-13T18:52:30.143428474Z" level=info msg="TearDown network for sandbox \"b754c1aed0174a8de7a8424c7c47055c44429e98fa84e4b4c97aaba818efe109\" successfully"
Feb 13 18:52:30.143564 containerd[1948]: time="2025-02-13T18:52:30.143452834Z" level=info msg="StopPodSandbox for \"b754c1aed0174a8de7a8424c7c47055c44429e98fa84e4b4c97aaba818efe109\" returns successfully"
Feb 13 18:52:30.144652 containerd[1948]: time="2025-02-13T18:52:30.144596878Z" level=info msg="StopPodSandbox for \"8f6280e06118461cd1ecfd80ffda28c3bfa6ef158e268e57d29983e1d9900d0a\""
Feb 13 18:52:30.145045 containerd[1948]: time="2025-02-13T18:52:30.144768934Z" level=info msg="TearDown network for sandbox \"8f6280e06118461cd1ecfd80ffda28c3bfa6ef158e268e57d29983e1d9900d0a\" successfully"
Feb 13 18:52:30.145045 containerd[1948]: time="2025-02-13T18:52:30.144793090Z" level=info msg="StopPodSandbox for \"8f6280e06118461cd1ecfd80ffda28c3bfa6ef158e268e57d29983e1d9900d0a\" returns successfully"
Feb 13 18:52:30.146224 containerd[1948]: time="2025-02-13T18:52:30.145889002Z" level=info msg="StopPodSandbox for \"32af185d42ece37e498a1de9c848ae9e10bd3f1cf48c65e388d7523c4fd6fcee\""
Feb 13 18:52:30.146224 containerd[1948]: time="2025-02-13T18:52:30.146056138Z" level=info msg="TearDown network for sandbox \"32af185d42ece37e498a1de9c848ae9e10bd3f1cf48c65e388d7523c4fd6fcee\" successfully"
Feb 13 18:52:30.146224 containerd[1948]: time="2025-02-13T18:52:30.146077174Z" level=info msg="StopPodSandbox for \"32af185d42ece37e498a1de9c848ae9e10bd3f1cf48c65e388d7523c4fd6fcee\" returns successfully"
Feb 13 18:52:30.147275 containerd[1948]: time="2025-02-13T18:52:30.147209602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-4lqsw,Uid:5206b6f6-6cc0-4889-b67e-8705aab95f76,Namespace:default,Attempt:7,}"
Feb 13 18:52:30.465312 (udev-worker)[3399]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 18:52:30.468243 systemd-networkd[1860]: calif86520e47c2: Link UP
Feb 13 18:52:30.468766 systemd-networkd[1860]: calif86520e47c2: Gained carrier
Feb 13 18:52:30.500563 containerd[1948]: 2025-02-13 18:52:30.257 [INFO][3439] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Feb 13 18:52:30.500563 containerd[1948]: 2025-02-13 18:52:30.290 [INFO][3439] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.21.163-k8s-csi--node--driver--5lvxl-eth0 csi-node-driver- calico-system  32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a 938 0 2025-02-13 18:52:09 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s  172.31.21.163  csi-node-driver-5lvxl eth0 csi-node-driver [] []   [kns.calico-system ksa.calico-system.csi-node-driver] calif86520e47c2  [] []}} ContainerID="187b1ccfb530b49316cca0dca143b46c6155ed9dd80ef79731dae7fe8175f2d3" Namespace="calico-system" Pod="csi-node-driver-5lvxl" WorkloadEndpoint="172.31.21.163-k8s-csi--node--driver--5lvxl-"
Feb 13 18:52:30.500563 containerd[1948]: 2025-02-13 18:52:30.290 [INFO][3439] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="187b1ccfb530b49316cca0dca143b46c6155ed9dd80ef79731dae7fe8175f2d3" Namespace="calico-system" Pod="csi-node-driver-5lvxl" WorkloadEndpoint="172.31.21.163-k8s-csi--node--driver--5lvxl-eth0"
Feb 13 18:52:30.500563 containerd[1948]: 2025-02-13 18:52:30.362 [INFO][3466] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="187b1ccfb530b49316cca0dca143b46c6155ed9dd80ef79731dae7fe8175f2d3" HandleID="k8s-pod-network.187b1ccfb530b49316cca0dca143b46c6155ed9dd80ef79731dae7fe8175f2d3" Workload="172.31.21.163-k8s-csi--node--driver--5lvxl-eth0"
Feb 13 18:52:30.500563 containerd[1948]: 2025-02-13 18:52:30.379 [INFO][3466] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="187b1ccfb530b49316cca0dca143b46c6155ed9dd80ef79731dae7fe8175f2d3" HandleID="k8s-pod-network.187b1ccfb530b49316cca0dca143b46c6155ed9dd80ef79731dae7fe8175f2d3" Workload="172.31.21.163-k8s-csi--node--driver--5lvxl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400011c660), Attrs:map[string]string{"namespace":"calico-system", "node":"172.31.21.163", "pod":"csi-node-driver-5lvxl", "timestamp":"2025-02-13 18:52:30.362933351 +0000 UTC"}, Hostname:"172.31.21.163", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 13 18:52:30.500563 containerd[1948]: 2025-02-13 18:52:30.379 [INFO][3466] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 18:52:30.500563 containerd[1948]: 2025-02-13 18:52:30.379 [INFO][3466] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 18:52:30.500563 containerd[1948]: 2025-02-13 18:52:30.379 [INFO][3466] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.21.163'
Feb 13 18:52:30.500563 containerd[1948]: 2025-02-13 18:52:30.382 [INFO][3466] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.187b1ccfb530b49316cca0dca143b46c6155ed9dd80ef79731dae7fe8175f2d3" host="172.31.21.163"
Feb 13 18:52:30.500563 containerd[1948]: 2025-02-13 18:52:30.391 [INFO][3466] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.21.163"
Feb 13 18:52:30.500563 containerd[1948]: 2025-02-13 18:52:30.408 [INFO][3466] ipam/ipam.go 489: Trying affinity for 192.168.14.192/26 host="172.31.21.163"
Feb 13 18:52:30.500563 containerd[1948]: 2025-02-13 18:52:30.412 [INFO][3466] ipam/ipam.go 155: Attempting to load block cidr=192.168.14.192/26 host="172.31.21.163"
Feb 13 18:52:30.500563 containerd[1948]: 2025-02-13 18:52:30.415 [INFO][3466] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.14.192/26 host="172.31.21.163"
Feb 13 18:52:30.500563 containerd[1948]: 2025-02-13 18:52:30.415 [INFO][3466] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.14.192/26 handle="k8s-pod-network.187b1ccfb530b49316cca0dca143b46c6155ed9dd80ef79731dae7fe8175f2d3" host="172.31.21.163"
Feb 13 18:52:30.500563 containerd[1948]: 2025-02-13 18:52:30.418 [INFO][3466] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.187b1ccfb530b49316cca0dca143b46c6155ed9dd80ef79731dae7fe8175f2d3
Feb 13 18:52:30.500563 containerd[1948]: 2025-02-13 18:52:30.428 [INFO][3466] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.14.192/26 handle="k8s-pod-network.187b1ccfb530b49316cca0dca143b46c6155ed9dd80ef79731dae7fe8175f2d3" host="172.31.21.163"
Feb 13 18:52:30.500563 containerd[1948]: 2025-02-13 18:52:30.441 [INFO][3466] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.14.193/26] block=192.168.14.192/26 handle="k8s-pod-network.187b1ccfb530b49316cca0dca143b46c6155ed9dd80ef79731dae7fe8175f2d3" host="172.31.21.163"
Feb 13 18:52:30.500563 containerd[1948]: 2025-02-13 18:52:30.441 [INFO][3466] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.14.193/26] handle="k8s-pod-network.187b1ccfb530b49316cca0dca143b46c6155ed9dd80ef79731dae7fe8175f2d3" host="172.31.21.163"
Feb 13 18:52:30.500563 containerd[1948]: 2025-02-13 18:52:30.441 [INFO][3466] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 18:52:30.500563 containerd[1948]: 2025-02-13 18:52:30.441 [INFO][3466] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.14.193/26] IPv6=[] ContainerID="187b1ccfb530b49316cca0dca143b46c6155ed9dd80ef79731dae7fe8175f2d3" HandleID="k8s-pod-network.187b1ccfb530b49316cca0dca143b46c6155ed9dd80ef79731dae7fe8175f2d3" Workload="172.31.21.163-k8s-csi--node--driver--5lvxl-eth0"
Feb 13 18:52:30.503503 containerd[1948]: 2025-02-13 18:52:30.449 [INFO][3439] cni-plugin/k8s.go 386: Populated endpoint ContainerID="187b1ccfb530b49316cca0dca143b46c6155ed9dd80ef79731dae7fe8175f2d3" Namespace="calico-system" Pod="csi-node-driver-5lvxl" WorkloadEndpoint="172.31.21.163-k8s-csi--node--driver--5lvxl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.163-k8s-csi--node--driver--5lvxl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 18, 52, 9, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.21.163", ContainerID:"", Pod:"csi-node-driver-5lvxl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.14.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif86520e47c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 18:52:30.503503 containerd[1948]: 2025-02-13 18:52:30.449 [INFO][3439] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.14.193/32] ContainerID="187b1ccfb530b49316cca0dca143b46c6155ed9dd80ef79731dae7fe8175f2d3" Namespace="calico-system" Pod="csi-node-driver-5lvxl" WorkloadEndpoint="172.31.21.163-k8s-csi--node--driver--5lvxl-eth0"
Feb 13 18:52:30.503503 containerd[1948]: 2025-02-13 18:52:30.449 [INFO][3439] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif86520e47c2 ContainerID="187b1ccfb530b49316cca0dca143b46c6155ed9dd80ef79731dae7fe8175f2d3" Namespace="calico-system" Pod="csi-node-driver-5lvxl" WorkloadEndpoint="172.31.21.163-k8s-csi--node--driver--5lvxl-eth0"
Feb 13 18:52:30.503503 containerd[1948]: 2025-02-13 18:52:30.471 [INFO][3439] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="187b1ccfb530b49316cca0dca143b46c6155ed9dd80ef79731dae7fe8175f2d3" Namespace="calico-system" Pod="csi-node-driver-5lvxl" WorkloadEndpoint="172.31.21.163-k8s-csi--node--driver--5lvxl-eth0"
Feb 13 18:52:30.503503 containerd[1948]: 2025-02-13 18:52:30.472 [INFO][3439] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="187b1ccfb530b49316cca0dca143b46c6155ed9dd80ef79731dae7fe8175f2d3" Namespace="calico-system" Pod="csi-node-driver-5lvxl" WorkloadEndpoint="172.31.21.163-k8s-csi--node--driver--5lvxl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.163-k8s-csi--node--driver--5lvxl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 18, 52, 9, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.21.163", ContainerID:"187b1ccfb530b49316cca0dca143b46c6155ed9dd80ef79731dae7fe8175f2d3", Pod:"csi-node-driver-5lvxl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.14.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif86520e47c2", MAC:"1e:cf:3b:c3:58:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 18:52:30.503503 containerd[1948]: 2025-02-13 18:52:30.497 [INFO][3439] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="187b1ccfb530b49316cca0dca143b46c6155ed9dd80ef79731dae7fe8175f2d3" Namespace="calico-system" Pod="csi-node-driver-5lvxl" WorkloadEndpoint="172.31.21.163-k8s-csi--node--driver--5lvxl-eth0"
Feb 13 18:52:30.512994 systemd-networkd[1860]: cali26dc94ea576: Link UP
Feb 13 18:52:30.513405 systemd-networkd[1860]: cali26dc94ea576: Gained carrier
Feb 13 18:52:30.534293 containerd[1948]: 2025-02-13 18:52:30.287 [INFO][3452] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Feb 13 18:52:30.534293 containerd[1948]: 2025-02-13 18:52:30.307 [INFO][3452] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.21.163-k8s-nginx--deployment--85f456d6dd--4lqsw-eth0 nginx-deployment-85f456d6dd- default  5206b6f6-6cc0-4889-b67e-8705aab95f76 1044 0 2025-02-13 18:52:23 +0000 UTC <nil> <nil> map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s  172.31.21.163  nginx-deployment-85f456d6dd-4lqsw eth0 default [] []   [kns.default ksa.default.default] cali26dc94ea576  [] []}} ContainerID="b8e04943bff125161154abacf3fbac96fe476ebab77918dc218d52a4a62ee481" Namespace="default" Pod="nginx-deployment-85f456d6dd-4lqsw" WorkloadEndpoint="172.31.21.163-k8s-nginx--deployment--85f456d6dd--4lqsw-"
Feb 13 18:52:30.534293 containerd[1948]: 2025-02-13 18:52:30.308 [INFO][3452] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b8e04943bff125161154abacf3fbac96fe476ebab77918dc218d52a4a62ee481" Namespace="default" Pod="nginx-deployment-85f456d6dd-4lqsw" WorkloadEndpoint="172.31.21.163-k8s-nginx--deployment--85f456d6dd--4lqsw-eth0"
Feb 13 18:52:30.534293 containerd[1948]: 2025-02-13 18:52:30.372 [INFO][3470] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b8e04943bff125161154abacf3fbac96fe476ebab77918dc218d52a4a62ee481" HandleID="k8s-pod-network.b8e04943bff125161154abacf3fbac96fe476ebab77918dc218d52a4a62ee481" Workload="172.31.21.163-k8s-nginx--deployment--85f456d6dd--4lqsw-eth0"
Feb 13 18:52:30.534293 containerd[1948]: 2025-02-13 18:52:30.389 [INFO][3470] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b8e04943bff125161154abacf3fbac96fe476ebab77918dc218d52a4a62ee481" HandleID="k8s-pod-network.b8e04943bff125161154abacf3fbac96fe476ebab77918dc218d52a4a62ee481" Workload="172.31.21.163-k8s-nginx--deployment--85f456d6dd--4lqsw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028c700), Attrs:map[string]string{"namespace":"default", "node":"172.31.21.163", "pod":"nginx-deployment-85f456d6dd-4lqsw", "timestamp":"2025-02-13 18:52:30.372066527 +0000 UTC"}, Hostname:"172.31.21.163", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 13 18:52:30.534293 containerd[1948]: 2025-02-13 18:52:30.389 [INFO][3470] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 18:52:30.534293 containerd[1948]: 2025-02-13 18:52:30.441 [INFO][3470] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 18:52:30.534293 containerd[1948]: 2025-02-13 18:52:30.441 [INFO][3470] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.21.163'
Feb 13 18:52:30.534293 containerd[1948]: 2025-02-13 18:52:30.444 [INFO][3470] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b8e04943bff125161154abacf3fbac96fe476ebab77918dc218d52a4a62ee481" host="172.31.21.163"
Feb 13 18:52:30.534293 containerd[1948]: 2025-02-13 18:52:30.452 [INFO][3470] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.21.163"
Feb 13 18:52:30.534293 containerd[1948]: 2025-02-13 18:52:30.460 [INFO][3470] ipam/ipam.go 489: Trying affinity for 192.168.14.192/26 host="172.31.21.163"
Feb 13 18:52:30.534293 containerd[1948]: 2025-02-13 18:52:30.467 [INFO][3470] ipam/ipam.go 155: Attempting to load block cidr=192.168.14.192/26 host="172.31.21.163"
Feb 13 18:52:30.534293 containerd[1948]: 2025-02-13 18:52:30.473 [INFO][3470] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.14.192/26 host="172.31.21.163"
Feb 13 18:52:30.534293 containerd[1948]: 2025-02-13 18:52:30.474 [INFO][3470] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.14.192/26 handle="k8s-pod-network.b8e04943bff125161154abacf3fbac96fe476ebab77918dc218d52a4a62ee481" host="172.31.21.163"
Feb 13 18:52:30.534293 containerd[1948]: 2025-02-13 18:52:30.478 [INFO][3470] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b8e04943bff125161154abacf3fbac96fe476ebab77918dc218d52a4a62ee481
Feb 13 18:52:30.534293 containerd[1948]: 2025-02-13 18:52:30.488 [INFO][3470] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.14.192/26 handle="k8s-pod-network.b8e04943bff125161154abacf3fbac96fe476ebab77918dc218d52a4a62ee481" host="172.31.21.163"
Feb 13 18:52:30.534293 containerd[1948]: 2025-02-13 18:52:30.502 [INFO][3470] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.14.194/26] block=192.168.14.192/26 handle="k8s-pod-network.b8e04943bff125161154abacf3fbac96fe476ebab77918dc218d52a4a62ee481" host="172.31.21.163"
Feb 13 18:52:30.534293 containerd[1948]: 2025-02-13 18:52:30.502 [INFO][3470] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.14.194/26] handle="k8s-pod-network.b8e04943bff125161154abacf3fbac96fe476ebab77918dc218d52a4a62ee481" host="172.31.21.163"
Feb 13 18:52:30.534293 containerd[1948]: 2025-02-13 18:52:30.502 [INFO][3470] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 18:52:30.534293 containerd[1948]: 2025-02-13 18:52:30.502 [INFO][3470] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.14.194/26] IPv6=[] ContainerID="b8e04943bff125161154abacf3fbac96fe476ebab77918dc218d52a4a62ee481" HandleID="k8s-pod-network.b8e04943bff125161154abacf3fbac96fe476ebab77918dc218d52a4a62ee481" Workload="172.31.21.163-k8s-nginx--deployment--85f456d6dd--4lqsw-eth0"
Feb 13 18:52:30.537295 containerd[1948]: 2025-02-13 18:52:30.508 [INFO][3452] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b8e04943bff125161154abacf3fbac96fe476ebab77918dc218d52a4a62ee481" Namespace="default" Pod="nginx-deployment-85f456d6dd-4lqsw" WorkloadEndpoint="172.31.21.163-k8s-nginx--deployment--85f456d6dd--4lqsw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.163-k8s-nginx--deployment--85f456d6dd--4lqsw-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"5206b6f6-6cc0-4889-b67e-8705aab95f76", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 18, 52, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.21.163", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-4lqsw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.14.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali26dc94ea576", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 18:52:30.537295 containerd[1948]: 2025-02-13 18:52:30.508 [INFO][3452] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.14.194/32] ContainerID="b8e04943bff125161154abacf3fbac96fe476ebab77918dc218d52a4a62ee481" Namespace="default" Pod="nginx-deployment-85f456d6dd-4lqsw" WorkloadEndpoint="172.31.21.163-k8s-nginx--deployment--85f456d6dd--4lqsw-eth0"
Feb 13 18:52:30.537295 containerd[1948]: 2025-02-13 18:52:30.508 [INFO][3452] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali26dc94ea576 ContainerID="b8e04943bff125161154abacf3fbac96fe476ebab77918dc218d52a4a62ee481" Namespace="default" Pod="nginx-deployment-85f456d6dd-4lqsw" WorkloadEndpoint="172.31.21.163-k8s-nginx--deployment--85f456d6dd--4lqsw-eth0"
Feb 13 18:52:30.537295 containerd[1948]: 2025-02-13 18:52:30.514 [INFO][3452] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b8e04943bff125161154abacf3fbac96fe476ebab77918dc218d52a4a62ee481" Namespace="default" Pod="nginx-deployment-85f456d6dd-4lqsw" WorkloadEndpoint="172.31.21.163-k8s-nginx--deployment--85f456d6dd--4lqsw-eth0"
Feb 13 18:52:30.537295 containerd[1948]: 2025-02-13 18:52:30.515 [INFO][3452] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b8e04943bff125161154abacf3fbac96fe476ebab77918dc218d52a4a62ee481" Namespace="default" Pod="nginx-deployment-85f456d6dd-4lqsw" WorkloadEndpoint="172.31.21.163-k8s-nginx--deployment--85f456d6dd--4lqsw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.163-k8s-nginx--deployment--85f456d6dd--4lqsw-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"5206b6f6-6cc0-4889-b67e-8705aab95f76", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 18, 52, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.21.163", ContainerID:"b8e04943bff125161154abacf3fbac96fe476ebab77918dc218d52a4a62ee481", Pod:"nginx-deployment-85f456d6dd-4lqsw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.14.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali26dc94ea576", MAC:"4a:53:e5:a2:65:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 18:52:30.537295 containerd[1948]: 2025-02-13 18:52:30.525 [INFO][3452] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b8e04943bff125161154abacf3fbac96fe476ebab77918dc218d52a4a62ee481" Namespace="default" Pod="nginx-deployment-85f456d6dd-4lqsw" WorkloadEndpoint="172.31.21.163-k8s-nginx--deployment--85f456d6dd--4lqsw-eth0"
Feb 13 18:52:30.559203 containerd[1948]: time="2025-02-13T18:52:30.558011532Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 18:52:30.559203 containerd[1948]: time="2025-02-13T18:52:30.558185028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 18:52:30.559203 containerd[1948]: time="2025-02-13T18:52:30.558248688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:52:30.559203 containerd[1948]: time="2025-02-13T18:52:30.558640452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:52:30.577717 containerd[1948]: time="2025-02-13T18:52:30.577460904Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 18:52:30.577717 containerd[1948]: time="2025-02-13T18:52:30.577597164Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 18:52:30.577717 containerd[1948]: time="2025-02-13T18:52:30.577628628Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:52:30.578081 containerd[1948]: time="2025-02-13T18:52:30.577789044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:52:30.598558 systemd[1]: Started cri-containerd-187b1ccfb530b49316cca0dca143b46c6155ed9dd80ef79731dae7fe8175f2d3.scope - libcontainer container 187b1ccfb530b49316cca0dca143b46c6155ed9dd80ef79731dae7fe8175f2d3.
Feb 13 18:52:30.631154 systemd[1]: Started cri-containerd-b8e04943bff125161154abacf3fbac96fe476ebab77918dc218d52a4a62ee481.scope - libcontainer container b8e04943bff125161154abacf3fbac96fe476ebab77918dc218d52a4a62ee481.
Feb 13 18:52:30.675930 containerd[1948]: time="2025-02-13T18:52:30.675725209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5lvxl,Uid:32e4b63f-eda9-4cc9-a124-9ffbc6d84e9a,Namespace:calico-system,Attempt:8,} returns sandbox id \"187b1ccfb530b49316cca0dca143b46c6155ed9dd80ef79731dae7fe8175f2d3\""
Feb 13 18:52:30.680318 containerd[1948]: time="2025-02-13T18:52:30.680247169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\""
Feb 13 18:52:30.718139 containerd[1948]: time="2025-02-13T18:52:30.718034629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-4lqsw,Uid:5206b6f6-6cc0-4889-b67e-8705aab95f76,Namespace:default,Attempt:7,} returns sandbox id \"b8e04943bff125161154abacf3fbac96fe476ebab77918dc218d52a4a62ee481\""
Feb 13 18:52:30.778699 kubelet[2435]: E0213 18:52:30.778550    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:31.524873 kernel: bpftool[3724]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Feb 13 18:52:31.779382 kubelet[2435]: E0213 18:52:31.779202    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:31.812288 (udev-worker)[3400]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 18:52:31.813616 systemd-networkd[1860]: vxlan.calico: Link UP
Feb 13 18:52:31.813624 systemd-networkd[1860]: vxlan.calico: Gained carrier
Feb 13 18:52:32.071962 systemd-networkd[1860]: calif86520e47c2: Gained IPv6LL
Feb 13 18:52:32.262283 systemd-networkd[1860]: cali26dc94ea576: Gained IPv6LL
Feb 13 18:52:32.383256 containerd[1948]: time="2025-02-13T18:52:32.382991737Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:52:32.385082 containerd[1948]: time="2025-02-13T18:52:32.384990565Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730"
Feb 13 18:52:32.387601 containerd[1948]: time="2025-02-13T18:52:32.387521149Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:52:32.394336 containerd[1948]: time="2025-02-13T18:52:32.394283401Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:52:32.395867 containerd[1948]: time="2025-02-13T18:52:32.395609557Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.715301128s"
Feb 13 18:52:32.395867 containerd[1948]: time="2025-02-13T18:52:32.395687005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\""
Feb 13 18:52:32.398683 containerd[1948]: time="2025-02-13T18:52:32.398195569Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 13 18:52:32.401051 containerd[1948]: time="2025-02-13T18:52:32.400797205Z" level=info msg="CreateContainer within sandbox \"187b1ccfb530b49316cca0dca143b46c6155ed9dd80ef79731dae7fe8175f2d3\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Feb 13 18:52:32.437347 containerd[1948]: time="2025-02-13T18:52:32.437106614Z" level=info msg="CreateContainer within sandbox \"187b1ccfb530b49316cca0dca143b46c6155ed9dd80ef79731dae7fe8175f2d3\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"21f39a347ef18f6d5902f39849fdd09088d563a2d9f807d49cca1b59da19a86d\""
Feb 13 18:52:32.438692 containerd[1948]: time="2025-02-13T18:52:32.438652286Z" level=info msg="StartContainer for \"21f39a347ef18f6d5902f39849fdd09088d563a2d9f807d49cca1b59da19a86d\""
Feb 13 18:52:32.498346 systemd[1]: Started cri-containerd-21f39a347ef18f6d5902f39849fdd09088d563a2d9f807d49cca1b59da19a86d.scope - libcontainer container 21f39a347ef18f6d5902f39849fdd09088d563a2d9f807d49cca1b59da19a86d.
Feb 13 18:52:32.562560 containerd[1948]: time="2025-02-13T18:52:32.562478402Z" level=info msg="StartContainer for \"21f39a347ef18f6d5902f39849fdd09088d563a2d9f807d49cca1b59da19a86d\" returns successfully"
Feb 13 18:52:32.779935 kubelet[2435]: E0213 18:52:32.779845    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:33.285907 systemd-networkd[1860]: vxlan.calico: Gained IPv6LL
Feb 13 18:52:33.780928 kubelet[2435]: E0213 18:52:33.780577    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:34.359209 update_engine[1929]: I20250213 18:52:34.359071  1929 update_attempter.cc:509] Updating boot flags...
Feb 13 18:52:34.544900 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3400)
Feb 13 18:52:34.780949 kubelet[2435]: E0213 18:52:34.780876    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:35.028475 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 39 scanned by (udev-worker) (3850)
Feb 13 18:52:35.573154 ntpd[1921]: Listen normally on 7 vxlan.calico 192.168.14.192:123
Feb 13 18:52:35.573291 ntpd[1921]: Listen normally on 8 calif86520e47c2 [fe80::ecee:eeff:feee:eeee%3]:123
Feb 13 18:52:35.573378 ntpd[1921]: Listen normally on 9 cali26dc94ea576 [fe80::ecee:eeff:feee:eeee%4]:123
Feb 13 18:52:35.573449 ntpd[1921]: Listen normally on 10 vxlan.calico [fe80::649c:c0ff:fe75:d53%5]:123
Feb 13 18:52:35.783203 kubelet[2435]: E0213 18:52:35.783129    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:36.783556 kubelet[2435]: E0213 18:52:36.783502    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:36.857937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2304178387.mount: Deactivated successfully.
Feb 13 18:52:37.785610 kubelet[2435]: E0213 18:52:37.785412    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:38.528704 containerd[1948]: time="2025-02-13T18:52:38.528579248Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:52:38.531152 containerd[1948]: time="2025-02-13T18:52:38.531054152Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69693086"
Feb 13 18:52:38.533412 containerd[1948]: time="2025-02-13T18:52:38.533298116Z" level=info msg="ImageCreate event name:\"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:52:38.540234 containerd[1948]: time="2025-02-13T18:52:38.540129200Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:52:38.542918 containerd[1948]: time="2025-02-13T18:52:38.542750672Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"69692964\" in 6.144487219s"
Feb 13 18:52:38.543544 containerd[1948]: time="2025-02-13T18:52:38.543226160Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\""
Feb 13 18:52:38.546822 containerd[1948]: time="2025-02-13T18:52:38.546177692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Feb 13 18:52:38.549557 containerd[1948]: time="2025-02-13T18:52:38.549422720Z" level=info msg="CreateContainer within sandbox \"b8e04943bff125161154abacf3fbac96fe476ebab77918dc218d52a4a62ee481\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Feb 13 18:52:38.586113 containerd[1948]: time="2025-02-13T18:52:38.586004576Z" level=info msg="CreateContainer within sandbox \"b8e04943bff125161154abacf3fbac96fe476ebab77918dc218d52a4a62ee481\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"f9de7c254a40d16b21874d7cd9004f93641bc7cac68f8d33aa6ed8135f6a4b8c\""
Feb 13 18:52:38.587784 containerd[1948]: time="2025-02-13T18:52:38.587438096Z" level=info msg="StartContainer for \"f9de7c254a40d16b21874d7cd9004f93641bc7cac68f8d33aa6ed8135f6a4b8c\""
Feb 13 18:52:38.648255 systemd[1]: Started cri-containerd-f9de7c254a40d16b21874d7cd9004f93641bc7cac68f8d33aa6ed8135f6a4b8c.scope - libcontainer container f9de7c254a40d16b21874d7cd9004f93641bc7cac68f8d33aa6ed8135f6a4b8c.
Feb 13 18:52:38.706937 containerd[1948]: time="2025-02-13T18:52:38.705558669Z" level=info msg="StartContainer for \"f9de7c254a40d16b21874d7cd9004f93641bc7cac68f8d33aa6ed8135f6a4b8c\" returns successfully"
Feb 13 18:52:38.786034 kubelet[2435]: E0213 18:52:38.785800    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:39.243083 kubelet[2435]: I0213 18:52:39.242947    2435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-4lqsw" podStartSLOduration=8.418338624 podStartE2EDuration="16.242902807s" podCreationTimestamp="2025-02-13 18:52:23 +0000 UTC" firstStartedPulling="2025-02-13 18:52:30.720856825 +0000 UTC m=+23.460406317" lastFinishedPulling="2025-02-13 18:52:38.545421008 +0000 UTC m=+31.284970500" observedRunningTime="2025-02-13 18:52:39.242520487 +0000 UTC m=+31.982069991" watchObservedRunningTime="2025-02-13 18:52:39.242902807 +0000 UTC m=+31.982452347"
Feb 13 18:52:39.787057 kubelet[2435]: E0213 18:52:39.786993    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:40.111190 containerd[1948]: time="2025-02-13T18:52:40.110968868Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:52:40.113014 containerd[1948]: time="2025-02-13T18:52:40.112871600Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368"
Feb 13 18:52:40.114017 containerd[1948]: time="2025-02-13T18:52:40.113929736Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:52:40.118642 containerd[1948]: time="2025-02-13T18:52:40.118512248Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:52:40.120941 containerd[1948]: time="2025-02-13T18:52:40.120671852Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.574390948s"
Feb 13 18:52:40.120941 containerd[1948]: time="2025-02-13T18:52:40.120753188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\""
Feb 13 18:52:40.125768 containerd[1948]: time="2025-02-13T18:52:40.125693756Z" level=info msg="CreateContainer within sandbox \"187b1ccfb530b49316cca0dca143b46c6155ed9dd80ef79731dae7fe8175f2d3\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Feb 13 18:52:40.155820 containerd[1948]: time="2025-02-13T18:52:40.155614244Z" level=info msg="CreateContainer within sandbox \"187b1ccfb530b49316cca0dca143b46c6155ed9dd80ef79731dae7fe8175f2d3\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"bf017a190a1cf287a3c8b77db4fcc08395b320fcd0c5811db2bb0f10547e36d1\""
Feb 13 18:52:40.156940 containerd[1948]: time="2025-02-13T18:52:40.156367016Z" level=info msg="StartContainer for \"bf017a190a1cf287a3c8b77db4fcc08395b320fcd0c5811db2bb0f10547e36d1\""
Feb 13 18:52:40.235087 systemd[1]: Started cri-containerd-bf017a190a1cf287a3c8b77db4fcc08395b320fcd0c5811db2bb0f10547e36d1.scope - libcontainer container bf017a190a1cf287a3c8b77db4fcc08395b320fcd0c5811db2bb0f10547e36d1.
Feb 13 18:52:40.322002 containerd[1948]: time="2025-02-13T18:52:40.321254961Z" level=info msg="StartContainer for \"bf017a190a1cf287a3c8b77db4fcc08395b320fcd0c5811db2bb0f10547e36d1\" returns successfully"
Feb 13 18:52:40.788125 kubelet[2435]: E0213 18:52:40.788055    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:40.895429 kubelet[2435]: I0213 18:52:40.895384    2435 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Feb 13 18:52:40.895429 kubelet[2435]: I0213 18:52:40.895436    2435 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Feb 13 18:52:41.287103 kubelet[2435]: I0213 18:52:41.287018    2435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-5lvxl" podStartSLOduration=22.843975219 podStartE2EDuration="32.28699627s" podCreationTimestamp="2025-02-13 18:52:09 +0000 UTC" firstStartedPulling="2025-02-13 18:52:30.679785217 +0000 UTC m=+23.419334721" lastFinishedPulling="2025-02-13 18:52:40.122806268 +0000 UTC m=+32.862355772" observedRunningTime="2025-02-13 18:52:41.28694329 +0000 UTC m=+34.026492830" watchObservedRunningTime="2025-02-13 18:52:41.28699627 +0000 UTC m=+34.026545810"
Feb 13 18:52:41.789094 kubelet[2435]: E0213 18:52:41.789005    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:42.789307 kubelet[2435]: E0213 18:52:42.789215    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:43.790357 kubelet[2435]: E0213 18:52:43.790286    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:44.790797 kubelet[2435]: E0213 18:52:44.790726    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:44.964331 kubelet[2435]: I0213 18:52:44.964252    2435 topology_manager.go:215] "Topology Admit Handler" podUID="08490471-90ad-4930-9623-cc8d4e8e8695" podNamespace="default" podName="nfs-server-provisioner-0"
Feb 13 18:52:44.978742 systemd[1]: Created slice kubepods-besteffort-pod08490471_90ad_4930_9623_cc8d4e8e8695.slice - libcontainer container kubepods-besteffort-pod08490471_90ad_4930_9623_cc8d4e8e8695.slice.
Feb 13 18:52:45.103210 kubelet[2435]: I0213 18:52:45.102660    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/08490471-90ad-4930-9623-cc8d4e8e8695-data\") pod \"nfs-server-provisioner-0\" (UID: \"08490471-90ad-4930-9623-cc8d4e8e8695\") " pod="default/nfs-server-provisioner-0"
Feb 13 18:52:45.103651 kubelet[2435]: I0213 18:52:45.103536    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l425v\" (UniqueName: \"kubernetes.io/projected/08490471-90ad-4930-9623-cc8d4e8e8695-kube-api-access-l425v\") pod \"nfs-server-provisioner-0\" (UID: \"08490471-90ad-4930-9623-cc8d4e8e8695\") " pod="default/nfs-server-provisioner-0"
Feb 13 18:52:45.287311 containerd[1948]: time="2025-02-13T18:52:45.286737469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:08490471-90ad-4930-9623-cc8d4e8e8695,Namespace:default,Attempt:0,}"
Feb 13 18:52:45.577073 systemd-networkd[1860]: cali60e51b789ff: Link UP
Feb 13 18:52:45.577637 systemd-networkd[1860]: cali60e51b789ff: Gained carrier
Feb 13 18:52:45.583969 (udev-worker)[4199]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 18:52:45.610766 containerd[1948]: 2025-02-13 18:52:45.403 [INFO][4180] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.21.163-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default  08490471-90ad-4930-9623-cc8d4e8e8695 1186 0 2025-02-13 18:52:44 +0000 UTC <nil> <nil> map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s  172.31.21.163  nfs-server-provisioner-0 eth0 nfs-server-provisioner [] []   [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff  [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="e16276990e999c715304dd9750923e727dd5ba6668f35fc6c8ac48a69f6e85c4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.21.163-k8s-nfs--server--provisioner--0-"
Feb 13 18:52:45.610766 containerd[1948]: 2025-02-13 18:52:45.404 [INFO][4180] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e16276990e999c715304dd9750923e727dd5ba6668f35fc6c8ac48a69f6e85c4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.21.163-k8s-nfs--server--provisioner--0-eth0"
Feb 13 18:52:45.610766 containerd[1948]: 2025-02-13 18:52:45.457 [INFO][4191] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e16276990e999c715304dd9750923e727dd5ba6668f35fc6c8ac48a69f6e85c4" HandleID="k8s-pod-network.e16276990e999c715304dd9750923e727dd5ba6668f35fc6c8ac48a69f6e85c4" Workload="172.31.21.163-k8s-nfs--server--provisioner--0-eth0"
Feb 13 18:52:45.610766 containerd[1948]: 2025-02-13 18:52:45.494 [INFO][4191] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e16276990e999c715304dd9750923e727dd5ba6668f35fc6c8ac48a69f6e85c4" HandleID="k8s-pod-network.e16276990e999c715304dd9750923e727dd5ba6668f35fc6c8ac48a69f6e85c4" Workload="172.31.21.163-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028cbf0), Attrs:map[string]string{"namespace":"default", "node":"172.31.21.163", "pod":"nfs-server-provisioner-0", "timestamp":"2025-02-13 18:52:45.457747046 +0000 UTC"}, Hostname:"172.31.21.163", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 13 18:52:45.610766 containerd[1948]: 2025-02-13 18:52:45.494 [INFO][4191] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 18:52:45.610766 containerd[1948]: 2025-02-13 18:52:45.494 [INFO][4191] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 18:52:45.610766 containerd[1948]: 2025-02-13 18:52:45.494 [INFO][4191] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.21.163'
Feb 13 18:52:45.610766 containerd[1948]: 2025-02-13 18:52:45.499 [INFO][4191] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e16276990e999c715304dd9750923e727dd5ba6668f35fc6c8ac48a69f6e85c4" host="172.31.21.163"
Feb 13 18:52:45.610766 containerd[1948]: 2025-02-13 18:52:45.515 [INFO][4191] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.21.163"
Feb 13 18:52:45.610766 containerd[1948]: 2025-02-13 18:52:45.524 [INFO][4191] ipam/ipam.go 489: Trying affinity for 192.168.14.192/26 host="172.31.21.163"
Feb 13 18:52:45.610766 containerd[1948]: 2025-02-13 18:52:45.529 [INFO][4191] ipam/ipam.go 155: Attempting to load block cidr=192.168.14.192/26 host="172.31.21.163"
Feb 13 18:52:45.610766 containerd[1948]: 2025-02-13 18:52:45.534 [INFO][4191] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.14.192/26 host="172.31.21.163"
Feb 13 18:52:45.610766 containerd[1948]: 2025-02-13 18:52:45.534 [INFO][4191] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.14.192/26 handle="k8s-pod-network.e16276990e999c715304dd9750923e727dd5ba6668f35fc6c8ac48a69f6e85c4" host="172.31.21.163"
Feb 13 18:52:45.610766 containerd[1948]: 2025-02-13 18:52:45.538 [INFO][4191] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e16276990e999c715304dd9750923e727dd5ba6668f35fc6c8ac48a69f6e85c4
Feb 13 18:52:45.610766 containerd[1948]: 2025-02-13 18:52:45.547 [INFO][4191] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.14.192/26 handle="k8s-pod-network.e16276990e999c715304dd9750923e727dd5ba6668f35fc6c8ac48a69f6e85c4" host="172.31.21.163"
Feb 13 18:52:45.610766 containerd[1948]: 2025-02-13 18:52:45.566 [INFO][4191] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.14.195/26] block=192.168.14.192/26 handle="k8s-pod-network.e16276990e999c715304dd9750923e727dd5ba6668f35fc6c8ac48a69f6e85c4" host="172.31.21.163"
Feb 13 18:52:45.610766 containerd[1948]: 2025-02-13 18:52:45.567 [INFO][4191] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.14.195/26] handle="k8s-pod-network.e16276990e999c715304dd9750923e727dd5ba6668f35fc6c8ac48a69f6e85c4" host="172.31.21.163"
Feb 13 18:52:45.610766 containerd[1948]: 2025-02-13 18:52:45.567 [INFO][4191] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 18:52:45.610766 containerd[1948]: 2025-02-13 18:52:45.567 [INFO][4191] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.14.195/26] IPv6=[] ContainerID="e16276990e999c715304dd9750923e727dd5ba6668f35fc6c8ac48a69f6e85c4" HandleID="k8s-pod-network.e16276990e999c715304dd9750923e727dd5ba6668f35fc6c8ac48a69f6e85c4" Workload="172.31.21.163-k8s-nfs--server--provisioner--0-eth0"
Feb 13 18:52:45.612336 containerd[1948]: 2025-02-13 18:52:45.570 [INFO][4180] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e16276990e999c715304dd9750923e727dd5ba6668f35fc6c8ac48a69f6e85c4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.21.163-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.163-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"08490471-90ad-4930-9623-cc8d4e8e8695", ResourceVersion:"1186", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 18, 52, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.21.163", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.14.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 18:52:45.612336 containerd[1948]: 2025-02-13 18:52:45.570 [INFO][4180] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.14.195/32] ContainerID="e16276990e999c715304dd9750923e727dd5ba6668f35fc6c8ac48a69f6e85c4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.21.163-k8s-nfs--server--provisioner--0-eth0"
Feb 13 18:52:45.612336 containerd[1948]: 2025-02-13 18:52:45.570 [INFO][4180] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="e16276990e999c715304dd9750923e727dd5ba6668f35fc6c8ac48a69f6e85c4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.21.163-k8s-nfs--server--provisioner--0-eth0"
Feb 13 18:52:45.612336 containerd[1948]: 2025-02-13 18:52:45.578 [INFO][4180] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e16276990e999c715304dd9750923e727dd5ba6668f35fc6c8ac48a69f6e85c4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.21.163-k8s-nfs--server--provisioner--0-eth0"
Feb 13 18:52:45.615057 containerd[1948]: 2025-02-13 18:52:45.579 [INFO][4180] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e16276990e999c715304dd9750923e727dd5ba6668f35fc6c8ac48a69f6e85c4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.21.163-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.163-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"08490471-90ad-4930-9623-cc8d4e8e8695", ResourceVersion:"1186", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 18, 52, 44, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.21.163", ContainerID:"e16276990e999c715304dd9750923e727dd5ba6668f35fc6c8ac48a69f6e85c4", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.14.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"d2:e7:bf:ae:81:b1", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 18:52:45.615057 containerd[1948]: 2025-02-13 18:52:45.604 [INFO][4180] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e16276990e999c715304dd9750923e727dd5ba6668f35fc6c8ac48a69f6e85c4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="172.31.21.163-k8s-nfs--server--provisioner--0-eth0"
Feb 13 18:52:45.662884 containerd[1948]: time="2025-02-13T18:52:45.662117583Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 18:52:45.662884 containerd[1948]: time="2025-02-13T18:52:45.662259099Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 18:52:45.662884 containerd[1948]: time="2025-02-13T18:52:45.662336451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:52:45.665124 containerd[1948]: time="2025-02-13T18:52:45.664359375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:52:45.725221 systemd[1]: Started cri-containerd-e16276990e999c715304dd9750923e727dd5ba6668f35fc6c8ac48a69f6e85c4.scope - libcontainer container e16276990e999c715304dd9750923e727dd5ba6668f35fc6c8ac48a69f6e85c4.
Feb 13 18:52:45.791614 kubelet[2435]: E0213 18:52:45.791538    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:45.800143 containerd[1948]: time="2025-02-13T18:52:45.800079832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:08490471-90ad-4930-9623-cc8d4e8e8695,Namespace:default,Attempt:0,} returns sandbox id \"e16276990e999c715304dd9750923e727dd5ba6668f35fc6c8ac48a69f6e85c4\""
Feb 13 18:52:45.805152 containerd[1948]: time="2025-02-13T18:52:45.805056964Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Feb 13 18:52:46.727563 systemd-networkd[1860]: cali60e51b789ff: Gained IPv6LL
Feb 13 18:52:46.792894 kubelet[2435]: E0213 18:52:46.792613    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:47.793167 kubelet[2435]: E0213 18:52:47.793112    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:48.683742 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount362856646.mount: Deactivated successfully.
Feb 13 18:52:48.727031 kubelet[2435]: E0213 18:52:48.726922    2435 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:48.794615 kubelet[2435]: E0213 18:52:48.794568    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:49.573463 ntpd[1921]: Listen normally on 11 cali60e51b789ff [fe80::ecee:eeff:feee:eeee%8]:123
Feb 13 18:52:49.796417 kubelet[2435]: E0213 18:52:49.796274    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:50.797073 kubelet[2435]: E0213 18:52:50.796966    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:51.797869 kubelet[2435]: E0213 18:52:51.797796    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:51.827660 containerd[1948]: time="2025-02-13T18:52:51.827153710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:52:51.829442 containerd[1948]: time="2025-02-13T18:52:51.829308874Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373623"
Feb 13 18:52:51.830245 containerd[1948]: time="2025-02-13T18:52:51.830147626Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:52:51.836018 containerd[1948]: time="2025-02-13T18:52:51.835874518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:52:51.838515 containerd[1948]: time="2025-02-13T18:52:51.838299730Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 6.032884878s"
Feb 13 18:52:51.838515 containerd[1948]: time="2025-02-13T18:52:51.838371622Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\""
Feb 13 18:52:51.844033 containerd[1948]: time="2025-02-13T18:52:51.843809986Z" level=info msg="CreateContainer within sandbox \"e16276990e999c715304dd9750923e727dd5ba6668f35fc6c8ac48a69f6e85c4\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Feb 13 18:52:51.868856 containerd[1948]: time="2025-02-13T18:52:51.868739458Z" level=info msg="CreateContainer within sandbox \"e16276990e999c715304dd9750923e727dd5ba6668f35fc6c8ac48a69f6e85c4\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"826fde418635483355279919eafb6509f93e204c33372264a7e378583da31090\""
Feb 13 18:52:51.869877 containerd[1948]: time="2025-02-13T18:52:51.869731462Z" level=info msg="StartContainer for \"826fde418635483355279919eafb6509f93e204c33372264a7e378583da31090\""
Feb 13 18:52:51.933778 systemd[1]: run-containerd-runc-k8s.io-826fde418635483355279919eafb6509f93e204c33372264a7e378583da31090-runc.ze6c5e.mount: Deactivated successfully.
Feb 13 18:52:51.945227 systemd[1]: Started cri-containerd-826fde418635483355279919eafb6509f93e204c33372264a7e378583da31090.scope - libcontainer container 826fde418635483355279919eafb6509f93e204c33372264a7e378583da31090.
Feb 13 18:52:51.999085 containerd[1948]: time="2025-02-13T18:52:51.998995007Z" level=info msg="StartContainer for \"826fde418635483355279919eafb6509f93e204c33372264a7e378583da31090\" returns successfully"
Feb 13 18:52:52.331888 kubelet[2435]: I0213 18:52:52.328602    2435 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.291974402 podStartE2EDuration="8.328578224s" podCreationTimestamp="2025-02-13 18:52:44 +0000 UTC" firstStartedPulling="2025-02-13 18:52:45.804090652 +0000 UTC m=+38.543640168" lastFinishedPulling="2025-02-13 18:52:51.840694486 +0000 UTC m=+44.580243990" observedRunningTime="2025-02-13 18:52:52.328428524 +0000 UTC m=+45.067978052" watchObservedRunningTime="2025-02-13 18:52:52.328578224 +0000 UTC m=+45.068127728"
Feb 13 18:52:52.799316 kubelet[2435]: E0213 18:52:52.799237    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:53.799903 kubelet[2435]: E0213 18:52:53.799793    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:54.800467 kubelet[2435]: E0213 18:52:54.800384    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:55.801447 kubelet[2435]: E0213 18:52:55.801380    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:56.802114 kubelet[2435]: E0213 18:52:56.802038    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:57.803183 kubelet[2435]: E0213 18:52:57.803089    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:58.804384 kubelet[2435]: E0213 18:52:58.804301    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:52:59.805458 kubelet[2435]: E0213 18:52:59.805381    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:00.807189 kubelet[2435]: E0213 18:53:00.807035    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:01.807592 kubelet[2435]: E0213 18:53:01.807522    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:02.808479 kubelet[2435]: E0213 18:53:02.808358    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:03.808886 kubelet[2435]: E0213 18:53:03.808776    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:04.809410 kubelet[2435]: E0213 18:53:04.809336    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:05.810332 kubelet[2435]: E0213 18:53:05.810251    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:06.811169 kubelet[2435]: E0213 18:53:06.811095    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:07.811899 kubelet[2435]: E0213 18:53:07.811809    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:08.727458 kubelet[2435]: E0213 18:53:08.727389    2435 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:08.767967 containerd[1948]: time="2025-02-13T18:53:08.767845730Z" level=info msg="StopPodSandbox for \"bc861172b5d21550081865a595ca2af0c013fca7f8c42f3e45b59a71e76b78aa\""
Feb 13 18:53:08.768740 containerd[1948]: time="2025-02-13T18:53:08.768031742Z" level=info msg="TearDown network for sandbox \"bc861172b5d21550081865a595ca2af0c013fca7f8c42f3e45b59a71e76b78aa\" successfully"
Feb 13 18:53:08.768740 containerd[1948]: time="2025-02-13T18:53:08.768055742Z" level=info msg="StopPodSandbox for \"bc861172b5d21550081865a595ca2af0c013fca7f8c42f3e45b59a71e76b78aa\" returns successfully"
Feb 13 18:53:08.769731 containerd[1948]: time="2025-02-13T18:53:08.769687502Z" level=info msg="RemovePodSandbox for \"bc861172b5d21550081865a595ca2af0c013fca7f8c42f3e45b59a71e76b78aa\""
Feb 13 18:53:08.769969 containerd[1948]: time="2025-02-13T18:53:08.769792394Z" level=info msg="Forcibly stopping sandbox \"bc861172b5d21550081865a595ca2af0c013fca7f8c42f3e45b59a71e76b78aa\""
Feb 13 18:53:08.770869 containerd[1948]: time="2025-02-13T18:53:08.770225834Z" level=info msg="TearDown network for sandbox \"bc861172b5d21550081865a595ca2af0c013fca7f8c42f3e45b59a71e76b78aa\" successfully"
Feb 13 18:53:08.778150 containerd[1948]: time="2025-02-13T18:53:08.778053002Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bc861172b5d21550081865a595ca2af0c013fca7f8c42f3e45b59a71e76b78aa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 18:53:08.778344 containerd[1948]: time="2025-02-13T18:53:08.778161398Z" level=info msg="RemovePodSandbox \"bc861172b5d21550081865a595ca2af0c013fca7f8c42f3e45b59a71e76b78aa\" returns successfully"
Feb 13 18:53:08.779388 containerd[1948]: time="2025-02-13T18:53:08.778985678Z" level=info msg="StopPodSandbox for \"322dd35104afe95111ae1ba3bdfdf0115e8b862a56b999b203771dea0aa62146\""
Feb 13 18:53:08.779388 containerd[1948]: time="2025-02-13T18:53:08.779210342Z" level=info msg="TearDown network for sandbox \"322dd35104afe95111ae1ba3bdfdf0115e8b862a56b999b203771dea0aa62146\" successfully"
Feb 13 18:53:08.779388 containerd[1948]: time="2025-02-13T18:53:08.779234402Z" level=info msg="StopPodSandbox for \"322dd35104afe95111ae1ba3bdfdf0115e8b862a56b999b203771dea0aa62146\" returns successfully"
Feb 13 18:53:08.780245 containerd[1948]: time="2025-02-13T18:53:08.780063530Z" level=info msg="RemovePodSandbox for \"322dd35104afe95111ae1ba3bdfdf0115e8b862a56b999b203771dea0aa62146\""
Feb 13 18:53:08.780245 containerd[1948]: time="2025-02-13T18:53:08.780134066Z" level=info msg="Forcibly stopping sandbox \"322dd35104afe95111ae1ba3bdfdf0115e8b862a56b999b203771dea0aa62146\""
Feb 13 18:53:08.780505 containerd[1948]: time="2025-02-13T18:53:08.780295418Z" level=info msg="TearDown network for sandbox \"322dd35104afe95111ae1ba3bdfdf0115e8b862a56b999b203771dea0aa62146\" successfully"
Feb 13 18:53:08.786244 containerd[1948]: time="2025-02-13T18:53:08.786063374Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"322dd35104afe95111ae1ba3bdfdf0115e8b862a56b999b203771dea0aa62146\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 18:53:08.786470 containerd[1948]: time="2025-02-13T18:53:08.786270326Z" level=info msg="RemovePodSandbox \"322dd35104afe95111ae1ba3bdfdf0115e8b862a56b999b203771dea0aa62146\" returns successfully"
Feb 13 18:53:08.787593 containerd[1948]: time="2025-02-13T18:53:08.787298270Z" level=info msg="StopPodSandbox for \"0cdb8e29a262e3a3eab1adc26297d2301b8e5ad2c49e7bf4e1e39b5e9f12ac3d\""
Feb 13 18:53:08.787593 containerd[1948]: time="2025-02-13T18:53:08.787469546Z" level=info msg="TearDown network for sandbox \"0cdb8e29a262e3a3eab1adc26297d2301b8e5ad2c49e7bf4e1e39b5e9f12ac3d\" successfully"
Feb 13 18:53:08.787593 containerd[1948]: time="2025-02-13T18:53:08.787491734Z" level=info msg="StopPodSandbox for \"0cdb8e29a262e3a3eab1adc26297d2301b8e5ad2c49e7bf4e1e39b5e9f12ac3d\" returns successfully"
Feb 13 18:53:08.788371 containerd[1948]: time="2025-02-13T18:53:08.788303258Z" level=info msg="RemovePodSandbox for \"0cdb8e29a262e3a3eab1adc26297d2301b8e5ad2c49e7bf4e1e39b5e9f12ac3d\""
Feb 13 18:53:08.788569 containerd[1948]: time="2025-02-13T18:53:08.788378198Z" level=info msg="Forcibly stopping sandbox \"0cdb8e29a262e3a3eab1adc26297d2301b8e5ad2c49e7bf4e1e39b5e9f12ac3d\""
Feb 13 18:53:08.788569 containerd[1948]: time="2025-02-13T18:53:08.788544626Z" level=info msg="TearDown network for sandbox \"0cdb8e29a262e3a3eab1adc26297d2301b8e5ad2c49e7bf4e1e39b5e9f12ac3d\" successfully"
Feb 13 18:53:08.795418 containerd[1948]: time="2025-02-13T18:53:08.795101930Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0cdb8e29a262e3a3eab1adc26297d2301b8e5ad2c49e7bf4e1e39b5e9f12ac3d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 18:53:08.795418 containerd[1948]: time="2025-02-13T18:53:08.795228026Z" level=info msg="RemovePodSandbox \"0cdb8e29a262e3a3eab1adc26297d2301b8e5ad2c49e7bf4e1e39b5e9f12ac3d\" returns successfully"
Feb 13 18:53:08.796531 containerd[1948]: time="2025-02-13T18:53:08.796228658Z" level=info msg="StopPodSandbox for \"8b0a86c76052a5da806a8314fbf5db1cb373122c98e2931eef3e0db116fa8bde\""
Feb 13 18:53:08.796531 containerd[1948]: time="2025-02-13T18:53:08.796396382Z" level=info msg="TearDown network for sandbox \"8b0a86c76052a5da806a8314fbf5db1cb373122c98e2931eef3e0db116fa8bde\" successfully"
Feb 13 18:53:08.796531 containerd[1948]: time="2025-02-13T18:53:08.796419722Z" level=info msg="StopPodSandbox for \"8b0a86c76052a5da806a8314fbf5db1cb373122c98e2931eef3e0db116fa8bde\" returns successfully"
Feb 13 18:53:08.797166 containerd[1948]: time="2025-02-13T18:53:08.797095346Z" level=info msg="RemovePodSandbox for \"8b0a86c76052a5da806a8314fbf5db1cb373122c98e2931eef3e0db116fa8bde\""
Feb 13 18:53:08.797274 containerd[1948]: time="2025-02-13T18:53:08.797168174Z" level=info msg="Forcibly stopping sandbox \"8b0a86c76052a5da806a8314fbf5db1cb373122c98e2931eef3e0db116fa8bde\""
Feb 13 18:53:08.797334 containerd[1948]: time="2025-02-13T18:53:08.797314382Z" level=info msg="TearDown network for sandbox \"8b0a86c76052a5da806a8314fbf5db1cb373122c98e2931eef3e0db116fa8bde\" successfully"
Feb 13 18:53:08.803244 containerd[1948]: time="2025-02-13T18:53:08.803133818Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8b0a86c76052a5da806a8314fbf5db1cb373122c98e2931eef3e0db116fa8bde\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 18:53:08.803244 containerd[1948]: time="2025-02-13T18:53:08.803250254Z" level=info msg="RemovePodSandbox \"8b0a86c76052a5da806a8314fbf5db1cb373122c98e2931eef3e0db116fa8bde\" returns successfully"
Feb 13 18:53:08.804556 containerd[1948]: time="2025-02-13T18:53:08.804153374Z" level=info msg="StopPodSandbox for \"bc4b734b3037a73ded3c541abacadf6f462c8c5520b8ef2e92d68b6112523e77\""
Feb 13 18:53:08.804556 containerd[1948]: time="2025-02-13T18:53:08.804350870Z" level=info msg="TearDown network for sandbox \"bc4b734b3037a73ded3c541abacadf6f462c8c5520b8ef2e92d68b6112523e77\" successfully"
Feb 13 18:53:08.804556 containerd[1948]: time="2025-02-13T18:53:08.804375170Z" level=info msg="StopPodSandbox for \"bc4b734b3037a73ded3c541abacadf6f462c8c5520b8ef2e92d68b6112523e77\" returns successfully"
Feb 13 18:53:08.805626 containerd[1948]: time="2025-02-13T18:53:08.805228562Z" level=info msg="RemovePodSandbox for \"bc4b734b3037a73ded3c541abacadf6f462c8c5520b8ef2e92d68b6112523e77\""
Feb 13 18:53:08.805626 containerd[1948]: time="2025-02-13T18:53:08.805303910Z" level=info msg="Forcibly stopping sandbox \"bc4b734b3037a73ded3c541abacadf6f462c8c5520b8ef2e92d68b6112523e77\""
Feb 13 18:53:08.805626 containerd[1948]: time="2025-02-13T18:53:08.805468706Z" level=info msg="TearDown network for sandbox \"bc4b734b3037a73ded3c541abacadf6f462c8c5520b8ef2e92d68b6112523e77\" successfully"
Feb 13 18:53:08.811073 containerd[1948]: time="2025-02-13T18:53:08.811003274Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bc4b734b3037a73ded3c541abacadf6f462c8c5520b8ef2e92d68b6112523e77\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 18:53:08.811302 containerd[1948]: time="2025-02-13T18:53:08.811087670Z" level=info msg="RemovePodSandbox \"bc4b734b3037a73ded3c541abacadf6f462c8c5520b8ef2e92d68b6112523e77\" returns successfully"
Feb 13 18:53:08.812082 kubelet[2435]: E0213 18:53:08.812002    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:08.812710 containerd[1948]: time="2025-02-13T18:53:08.812156186Z" level=info msg="StopPodSandbox for \"c8c3c05e91104a49860bfbac2d02a716413c29f881b6cd4f103adf3df9288925\""
Feb 13 18:53:08.812710 containerd[1948]: time="2025-02-13T18:53:08.812356226Z" level=info msg="TearDown network for sandbox \"c8c3c05e91104a49860bfbac2d02a716413c29f881b6cd4f103adf3df9288925\" successfully"
Feb 13 18:53:08.812710 containerd[1948]: time="2025-02-13T18:53:08.812380250Z" level=info msg="StopPodSandbox for \"c8c3c05e91104a49860bfbac2d02a716413c29f881b6cd4f103adf3df9288925\" returns successfully"
Feb 13 18:53:08.813241 containerd[1948]: time="2025-02-13T18:53:08.813182606Z" level=info msg="RemovePodSandbox for \"c8c3c05e91104a49860bfbac2d02a716413c29f881b6cd4f103adf3df9288925\""
Feb 13 18:53:08.813320 containerd[1948]: time="2025-02-13T18:53:08.813237746Z" level=info msg="Forcibly stopping sandbox \"c8c3c05e91104a49860bfbac2d02a716413c29f881b6cd4f103adf3df9288925\""
Feb 13 18:53:08.813408 containerd[1948]: time="2025-02-13T18:53:08.813375830Z" level=info msg="TearDown network for sandbox \"c8c3c05e91104a49860bfbac2d02a716413c29f881b6cd4f103adf3df9288925\" successfully"
Feb 13 18:53:08.819094 containerd[1948]: time="2025-02-13T18:53:08.818972534Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c8c3c05e91104a49860bfbac2d02a716413c29f881b6cd4f103adf3df9288925\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 18:53:08.819255 containerd[1948]: time="2025-02-13T18:53:08.819123758Z" level=info msg="RemovePodSandbox \"c8c3c05e91104a49860bfbac2d02a716413c29f881b6cd4f103adf3df9288925\" returns successfully"
Feb 13 18:53:08.820014 containerd[1948]: time="2025-02-13T18:53:08.819901022Z" level=info msg="StopPodSandbox for \"488c9dbd53977a754e7e0f26b89f6c31ae00f5e9243e49de747468528142d019\""
Feb 13 18:53:08.820177 containerd[1948]: time="2025-02-13T18:53:08.820141694Z" level=info msg="TearDown network for sandbox \"488c9dbd53977a754e7e0f26b89f6c31ae00f5e9243e49de747468528142d019\" successfully"
Feb 13 18:53:08.820287 containerd[1948]: time="2025-02-13T18:53:08.820175630Z" level=info msg="StopPodSandbox for \"488c9dbd53977a754e7e0f26b89f6c31ae00f5e9243e49de747468528142d019\" returns successfully"
Feb 13 18:53:08.821013 containerd[1948]: time="2025-02-13T18:53:08.820968158Z" level=info msg="RemovePodSandbox for \"488c9dbd53977a754e7e0f26b89f6c31ae00f5e9243e49de747468528142d019\""
Feb 13 18:53:08.821215 containerd[1948]: time="2025-02-13T18:53:08.821017034Z" level=info msg="Forcibly stopping sandbox \"488c9dbd53977a754e7e0f26b89f6c31ae00f5e9243e49de747468528142d019\""
Feb 13 18:53:08.821215 containerd[1948]: time="2025-02-13T18:53:08.821181878Z" level=info msg="TearDown network for sandbox \"488c9dbd53977a754e7e0f26b89f6c31ae00f5e9243e49de747468528142d019\" successfully"
Feb 13 18:53:08.826797 containerd[1948]: time="2025-02-13T18:53:08.826699598Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"488c9dbd53977a754e7e0f26b89f6c31ae00f5e9243e49de747468528142d019\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 18:53:08.826797 containerd[1948]: time="2025-02-13T18:53:08.826784462Z" level=info msg="RemovePodSandbox \"488c9dbd53977a754e7e0f26b89f6c31ae00f5e9243e49de747468528142d019\" returns successfully"
Feb 13 18:53:08.827711 containerd[1948]: time="2025-02-13T18:53:08.827406422Z" level=info msg="StopPodSandbox for \"87f4f102dfc54d448278746ef84ca4404ac78972dbbdcc27dba94db8cc19970c\""
Feb 13 18:53:08.827711 containerd[1948]: time="2025-02-13T18:53:08.827564210Z" level=info msg="TearDown network for sandbox \"87f4f102dfc54d448278746ef84ca4404ac78972dbbdcc27dba94db8cc19970c\" successfully"
Feb 13 18:53:08.827711 containerd[1948]: time="2025-02-13T18:53:08.827585546Z" level=info msg="StopPodSandbox for \"87f4f102dfc54d448278746ef84ca4404ac78972dbbdcc27dba94db8cc19970c\" returns successfully"
Feb 13 18:53:08.828617 containerd[1948]: time="2025-02-13T18:53:08.828144674Z" level=info msg="RemovePodSandbox for \"87f4f102dfc54d448278746ef84ca4404ac78972dbbdcc27dba94db8cc19970c\""
Feb 13 18:53:08.828617 containerd[1948]: time="2025-02-13T18:53:08.828350030Z" level=info msg="Forcibly stopping sandbox \"87f4f102dfc54d448278746ef84ca4404ac78972dbbdcc27dba94db8cc19970c\""
Feb 13 18:53:08.828617 containerd[1948]: time="2025-02-13T18:53:08.828481130Z" level=info msg="TearDown network for sandbox \"87f4f102dfc54d448278746ef84ca4404ac78972dbbdcc27dba94db8cc19970c\" successfully"
Feb 13 18:53:08.834156 containerd[1948]: time="2025-02-13T18:53:08.834040910Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"87f4f102dfc54d448278746ef84ca4404ac78972dbbdcc27dba94db8cc19970c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 18:53:08.834337 containerd[1948]: time="2025-02-13T18:53:08.834177338Z" level=info msg="RemovePodSandbox \"87f4f102dfc54d448278746ef84ca4404ac78972dbbdcc27dba94db8cc19970c\" returns successfully"
Feb 13 18:53:08.835204 containerd[1948]: time="2025-02-13T18:53:08.834803354Z" level=info msg="StopPodSandbox for \"32af185d42ece37e498a1de9c848ae9e10bd3f1cf48c65e388d7523c4fd6fcee\""
Feb 13 18:53:08.835204 containerd[1948]: time="2025-02-13T18:53:08.834983186Z" level=info msg="TearDown network for sandbox \"32af185d42ece37e498a1de9c848ae9e10bd3f1cf48c65e388d7523c4fd6fcee\" successfully"
Feb 13 18:53:08.835204 containerd[1948]: time="2025-02-13T18:53:08.835007846Z" level=info msg="StopPodSandbox for \"32af185d42ece37e498a1de9c848ae9e10bd3f1cf48c65e388d7523c4fd6fcee\" returns successfully"
Feb 13 18:53:08.836017 containerd[1948]: time="2025-02-13T18:53:08.835955738Z" level=info msg="RemovePodSandbox for \"32af185d42ece37e498a1de9c848ae9e10bd3f1cf48c65e388d7523c4fd6fcee\""
Feb 13 18:53:08.836017 containerd[1948]: time="2025-02-13T18:53:08.836012378Z" level=info msg="Forcibly stopping sandbox \"32af185d42ece37e498a1de9c848ae9e10bd3f1cf48c65e388d7523c4fd6fcee\""
Feb 13 18:53:08.836180 containerd[1948]: time="2025-02-13T18:53:08.836148290Z" level=info msg="TearDown network for sandbox \"32af185d42ece37e498a1de9c848ae9e10bd3f1cf48c65e388d7523c4fd6fcee\" successfully"
Feb 13 18:53:08.841569 containerd[1948]: time="2025-02-13T18:53:08.841478162Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"32af185d42ece37e498a1de9c848ae9e10bd3f1cf48c65e388d7523c4fd6fcee\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 18:53:08.841569 containerd[1948]: time="2025-02-13T18:53:08.841567442Z" level=info msg="RemovePodSandbox \"32af185d42ece37e498a1de9c848ae9e10bd3f1cf48c65e388d7523c4fd6fcee\" returns successfully"
Feb 13 18:53:08.843225 containerd[1948]: time="2025-02-13T18:53:08.842819954Z" level=info msg="StopPodSandbox for \"8f6280e06118461cd1ecfd80ffda28c3bfa6ef158e268e57d29983e1d9900d0a\""
Feb 13 18:53:08.843225 containerd[1948]: time="2025-02-13T18:53:08.843079706Z" level=info msg="TearDown network for sandbox \"8f6280e06118461cd1ecfd80ffda28c3bfa6ef158e268e57d29983e1d9900d0a\" successfully"
Feb 13 18:53:08.843225 containerd[1948]: time="2025-02-13T18:53:08.843103298Z" level=info msg="StopPodSandbox for \"8f6280e06118461cd1ecfd80ffda28c3bfa6ef158e268e57d29983e1d9900d0a\" returns successfully"
Feb 13 18:53:08.844112 containerd[1948]: time="2025-02-13T18:53:08.843999962Z" level=info msg="RemovePodSandbox for \"8f6280e06118461cd1ecfd80ffda28c3bfa6ef158e268e57d29983e1d9900d0a\""
Feb 13 18:53:08.844112 containerd[1948]: time="2025-02-13T18:53:08.844064258Z" level=info msg="Forcibly stopping sandbox \"8f6280e06118461cd1ecfd80ffda28c3bfa6ef158e268e57d29983e1d9900d0a\""
Feb 13 18:53:08.844354 containerd[1948]: time="2025-02-13T18:53:08.844217606Z" level=info msg="TearDown network for sandbox \"8f6280e06118461cd1ecfd80ffda28c3bfa6ef158e268e57d29983e1d9900d0a\" successfully"
Feb 13 18:53:08.849885 containerd[1948]: time="2025-02-13T18:53:08.849786543Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8f6280e06118461cd1ecfd80ffda28c3bfa6ef158e268e57d29983e1d9900d0a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 18:53:08.849885 containerd[1948]: time="2025-02-13T18:53:08.849883683Z" level=info msg="RemovePodSandbox \"8f6280e06118461cd1ecfd80ffda28c3bfa6ef158e268e57d29983e1d9900d0a\" returns successfully"
Feb 13 18:53:08.850976 containerd[1948]: time="2025-02-13T18:53:08.850660647Z" level=info msg="StopPodSandbox for \"b754c1aed0174a8de7a8424c7c47055c44429e98fa84e4b4c97aaba818efe109\""
Feb 13 18:53:08.850976 containerd[1948]: time="2025-02-13T18:53:08.850821795Z" level=info msg="TearDown network for sandbox \"b754c1aed0174a8de7a8424c7c47055c44429e98fa84e4b4c97aaba818efe109\" successfully"
Feb 13 18:53:08.850976 containerd[1948]: time="2025-02-13T18:53:08.850880259Z" level=info msg="StopPodSandbox for \"b754c1aed0174a8de7a8424c7c47055c44429e98fa84e4b4c97aaba818efe109\" returns successfully"
Feb 13 18:53:08.852011 containerd[1948]: time="2025-02-13T18:53:08.851955135Z" level=info msg="RemovePodSandbox for \"b754c1aed0174a8de7a8424c7c47055c44429e98fa84e4b4c97aaba818efe109\""
Feb 13 18:53:08.852134 containerd[1948]: time="2025-02-13T18:53:08.852016383Z" level=info msg="Forcibly stopping sandbox \"b754c1aed0174a8de7a8424c7c47055c44429e98fa84e4b4c97aaba818efe109\""
Feb 13 18:53:08.852233 containerd[1948]: time="2025-02-13T18:53:08.852195759Z" level=info msg="TearDown network for sandbox \"b754c1aed0174a8de7a8424c7c47055c44429e98fa84e4b4c97aaba818efe109\" successfully"
Feb 13 18:53:08.857822 containerd[1948]: time="2025-02-13T18:53:08.857740011Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b754c1aed0174a8de7a8424c7c47055c44429e98fa84e4b4c97aaba818efe109\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 18:53:08.857979 containerd[1948]: time="2025-02-13T18:53:08.857856639Z" level=info msg="RemovePodSandbox \"b754c1aed0174a8de7a8424c7c47055c44429e98fa84e4b4c97aaba818efe109\" returns successfully"
Feb 13 18:53:08.859331 containerd[1948]: time="2025-02-13T18:53:08.858870603Z" level=info msg="StopPodSandbox for \"e32d4f5044927b88a216ad26c029878a1d33b5c6bcda0bf79656794608e62df3\""
Feb 13 18:53:08.859331 containerd[1948]: time="2025-02-13T18:53:08.859162791Z" level=info msg="TearDown network for sandbox \"e32d4f5044927b88a216ad26c029878a1d33b5c6bcda0bf79656794608e62df3\" successfully"
Feb 13 18:53:08.859331 containerd[1948]: time="2025-02-13T18:53:08.859198587Z" level=info msg="StopPodSandbox for \"e32d4f5044927b88a216ad26c029878a1d33b5c6bcda0bf79656794608e62df3\" returns successfully"
Feb 13 18:53:08.861931 containerd[1948]: time="2025-02-13T18:53:08.860078391Z" level=info msg="RemovePodSandbox for \"e32d4f5044927b88a216ad26c029878a1d33b5c6bcda0bf79656794608e62df3\""
Feb 13 18:53:08.861931 containerd[1948]: time="2025-02-13T18:53:08.860124351Z" level=info msg="Forcibly stopping sandbox \"e32d4f5044927b88a216ad26c029878a1d33b5c6bcda0bf79656794608e62df3\""
Feb 13 18:53:08.861931 containerd[1948]: time="2025-02-13T18:53:08.860269719Z" level=info msg="TearDown network for sandbox \"e32d4f5044927b88a216ad26c029878a1d33b5c6bcda0bf79656794608e62df3\" successfully"
Feb 13 18:53:08.865979 containerd[1948]: time="2025-02-13T18:53:08.865898355Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e32d4f5044927b88a216ad26c029878a1d33b5c6bcda0bf79656794608e62df3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 18:53:08.866463 containerd[1948]: time="2025-02-13T18:53:08.866256747Z" level=info msg="RemovePodSandbox \"e32d4f5044927b88a216ad26c029878a1d33b5c6bcda0bf79656794608e62df3\" returns successfully"
Feb 13 18:53:08.867325 containerd[1948]: time="2025-02-13T18:53:08.867283899Z" level=info msg="StopPodSandbox for \"ca45f0ae59894a3cacb04838bec358487d72aa33ddb36a39b959a92b36e4b3e3\""
Feb 13 18:53:08.868008 containerd[1948]: time="2025-02-13T18:53:08.867700575Z" level=info msg="TearDown network for sandbox \"ca45f0ae59894a3cacb04838bec358487d72aa33ddb36a39b959a92b36e4b3e3\" successfully"
Feb 13 18:53:08.868008 containerd[1948]: time="2025-02-13T18:53:08.867731511Z" level=info msg="StopPodSandbox for \"ca45f0ae59894a3cacb04838bec358487d72aa33ddb36a39b959a92b36e4b3e3\" returns successfully"
Feb 13 18:53:08.868939 containerd[1948]: time="2025-02-13T18:53:08.868864599Z" level=info msg="RemovePodSandbox for \"ca45f0ae59894a3cacb04838bec358487d72aa33ddb36a39b959a92b36e4b3e3\""
Feb 13 18:53:08.869050 containerd[1948]: time="2025-02-13T18:53:08.868939551Z" level=info msg="Forcibly stopping sandbox \"ca45f0ae59894a3cacb04838bec358487d72aa33ddb36a39b959a92b36e4b3e3\""
Feb 13 18:53:08.869198 containerd[1948]: time="2025-02-13T18:53:08.869116431Z" level=info msg="TearDown network for sandbox \"ca45f0ae59894a3cacb04838bec358487d72aa33ddb36a39b959a92b36e4b3e3\" successfully"
Feb 13 18:53:08.876002 containerd[1948]: time="2025-02-13T18:53:08.875704251Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ca45f0ae59894a3cacb04838bec358487d72aa33ddb36a39b959a92b36e4b3e3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 18:53:08.876002 containerd[1948]: time="2025-02-13T18:53:08.875776635Z" level=info msg="RemovePodSandbox \"ca45f0ae59894a3cacb04838bec358487d72aa33ddb36a39b959a92b36e4b3e3\" returns successfully"
Feb 13 18:53:08.876892 containerd[1948]: time="2025-02-13T18:53:08.876684939Z" level=info msg="StopPodSandbox for \"631aab14d63b1f1150c1a50783c9edf3ae62cb7ec620c50ccdd7f15465a490dd\""
Feb 13 18:53:08.877030 containerd[1948]: time="2025-02-13T18:53:08.876959763Z" level=info msg="TearDown network for sandbox \"631aab14d63b1f1150c1a50783c9edf3ae62cb7ec620c50ccdd7f15465a490dd\" successfully"
Feb 13 18:53:08.877030 containerd[1948]: time="2025-02-13T18:53:08.876994995Z" level=info msg="StopPodSandbox for \"631aab14d63b1f1150c1a50783c9edf3ae62cb7ec620c50ccdd7f15465a490dd\" returns successfully"
Feb 13 18:53:08.878277 containerd[1948]: time="2025-02-13T18:53:08.878002911Z" level=info msg="RemovePodSandbox for \"631aab14d63b1f1150c1a50783c9edf3ae62cb7ec620c50ccdd7f15465a490dd\""
Feb 13 18:53:08.878277 containerd[1948]: time="2025-02-13T18:53:08.878284707Z" level=info msg="Forcibly stopping sandbox \"631aab14d63b1f1150c1a50783c9edf3ae62cb7ec620c50ccdd7f15465a490dd\""
Feb 13 18:53:08.878667 containerd[1948]: time="2025-02-13T18:53:08.878569035Z" level=info msg="TearDown network for sandbox \"631aab14d63b1f1150c1a50783c9edf3ae62cb7ec620c50ccdd7f15465a490dd\" successfully"
Feb 13 18:53:08.884948 containerd[1948]: time="2025-02-13T18:53:08.884798307Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"631aab14d63b1f1150c1a50783c9edf3ae62cb7ec620c50ccdd7f15465a490dd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 18:53:08.885199 containerd[1948]: time="2025-02-13T18:53:08.884968899Z" level=info msg="RemovePodSandbox \"631aab14d63b1f1150c1a50783c9edf3ae62cb7ec620c50ccdd7f15465a490dd\" returns successfully"
Feb 13 18:53:08.886087 containerd[1948]: time="2025-02-13T18:53:08.886029855Z" level=info msg="StopPodSandbox for \"149daf4cbeb57fe2ab21c96dbd329b65eeb250a1971264e49db3f28afb765fbb\""
Feb 13 18:53:08.886556 containerd[1948]: time="2025-02-13T18:53:08.886368555Z" level=info msg="TearDown network for sandbox \"149daf4cbeb57fe2ab21c96dbd329b65eeb250a1971264e49db3f28afb765fbb\" successfully"
Feb 13 18:53:08.886556 containerd[1948]: time="2025-02-13T18:53:08.886412715Z" level=info msg="StopPodSandbox for \"149daf4cbeb57fe2ab21c96dbd329b65eeb250a1971264e49db3f28afb765fbb\" returns successfully"
Feb 13 18:53:08.887880 containerd[1948]: time="2025-02-13T18:53:08.887788611Z" level=info msg="RemovePodSandbox for \"149daf4cbeb57fe2ab21c96dbd329b65eeb250a1971264e49db3f28afb765fbb\""
Feb 13 18:53:08.888096 containerd[1948]: time="2025-02-13T18:53:08.887885271Z" level=info msg="Forcibly stopping sandbox \"149daf4cbeb57fe2ab21c96dbd329b65eeb250a1971264e49db3f28afb765fbb\""
Feb 13 18:53:08.888096 containerd[1948]: time="2025-02-13T18:53:08.888071727Z" level=info msg="TearDown network for sandbox \"149daf4cbeb57fe2ab21c96dbd329b65eeb250a1971264e49db3f28afb765fbb\" successfully"
Feb 13 18:53:08.898271 containerd[1948]: time="2025-02-13T18:53:08.898016607Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"149daf4cbeb57fe2ab21c96dbd329b65eeb250a1971264e49db3f28afb765fbb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 18:53:08.898271 containerd[1948]: time="2025-02-13T18:53:08.898108731Z" level=info msg="RemovePodSandbox \"149daf4cbeb57fe2ab21c96dbd329b65eeb250a1971264e49db3f28afb765fbb\" returns successfully"
Feb 13 18:53:09.812479 kubelet[2435]: E0213 18:53:09.812408    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:10.812636 kubelet[2435]: E0213 18:53:10.812558    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:11.812780 kubelet[2435]: E0213 18:53:11.812709    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:12.813028 kubelet[2435]: E0213 18:53:12.812953    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:13.814120 kubelet[2435]: E0213 18:53:13.814040    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:14.814595 kubelet[2435]: E0213 18:53:14.814520    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:15.814983 kubelet[2435]: E0213 18:53:15.814895    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:16.680221 kubelet[2435]: I0213 18:53:16.680157    2435 topology_manager.go:215] "Topology Admit Handler" podUID="92ce9158-6391-4cec-b7ad-e2d5c26094ee" podNamespace="default" podName="test-pod-1"
Feb 13 18:53:16.692564 systemd[1]: Created slice kubepods-besteffort-pod92ce9158_6391_4cec_b7ad_e2d5c26094ee.slice - libcontainer container kubepods-besteffort-pod92ce9158_6391_4cec_b7ad_e2d5c26094ee.slice.
Feb 13 18:53:16.801892 kubelet[2435]: I0213 18:53:16.801773    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-3fa9a4b7-e28b-44fb-8b42-d6c73488014a\" (UniqueName: \"kubernetes.io/nfs/92ce9158-6391-4cec-b7ad-e2d5c26094ee-pvc-3fa9a4b7-e28b-44fb-8b42-d6c73488014a\") pod \"test-pod-1\" (UID: \"92ce9158-6391-4cec-b7ad-e2d5c26094ee\") " pod="default/test-pod-1"
Feb 13 18:53:16.802136 kubelet[2435]: I0213 18:53:16.801911    2435 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zdhc\" (UniqueName: \"kubernetes.io/projected/92ce9158-6391-4cec-b7ad-e2d5c26094ee-kube-api-access-6zdhc\") pod \"test-pod-1\" (UID: \"92ce9158-6391-4cec-b7ad-e2d5c26094ee\") " pod="default/test-pod-1"
Feb 13 18:53:16.815538 kubelet[2435]: E0213 18:53:16.815344    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:16.940090 kernel: FS-Cache: Loaded
Feb 13 18:53:16.985755 kernel: RPC: Registered named UNIX socket transport module.
Feb 13 18:53:16.985969 kernel: RPC: Registered udp transport module.
Feb 13 18:53:16.986019 kernel: RPC: Registered tcp transport module.
Feb 13 18:53:16.986059 kernel: RPC: Registered tcp-with-tls transport module.
Feb 13 18:53:16.987016 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 13 18:53:17.308680 kernel: NFS: Registering the id_resolver key type
Feb 13 18:53:17.308910 kernel: Key type id_resolver registered
Feb 13 18:53:17.310243 kernel: Key type id_legacy registered
Feb 13 18:53:17.352478 nfsidmap[4419]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Feb 13 18:53:17.360131 nfsidmap[4420]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Feb 13 18:53:17.599736 containerd[1948]: time="2025-02-13T18:53:17.599266702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:92ce9158-6391-4cec-b7ad-e2d5c26094ee,Namespace:default,Attempt:0,}"
Feb 13 18:53:17.815882 kubelet[2435]: E0213 18:53:17.815766    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:17.827648 (udev-worker)[4408]: Network interface NamePolicy= disabled on kernel command line.
Feb 13 18:53:17.830733 systemd-networkd[1860]: cali5ec59c6bf6e: Link UP
Feb 13 18:53:17.834128 systemd-networkd[1860]: cali5ec59c6bf6e: Gained carrier
Feb 13 18:53:17.864668 containerd[1948]: 2025-02-13 18:53:17.700 [INFO][4423] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172.31.21.163-k8s-test--pod--1-eth0  default  92ce9158-6391-4cec-b7ad-e2d5c26094ee 1302 0 2025-02-13 18:52:45 +0000 UTC <nil> <nil> map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s  172.31.21.163  test-pod-1 eth0 default [] []   [kns.default ksa.default.default] cali5ec59c6bf6e  [] []}} ContainerID="0f66199c7ffcb07e281122b32655f9c4e0430f6a0842c715724c88e9a97d1959" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.21.163-k8s-test--pod--1-"
Feb 13 18:53:17.864668 containerd[1948]: 2025-02-13 18:53:17.701 [INFO][4423] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0f66199c7ffcb07e281122b32655f9c4e0430f6a0842c715724c88e9a97d1959" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.21.163-k8s-test--pod--1-eth0"
Feb 13 18:53:17.864668 containerd[1948]: 2025-02-13 18:53:17.754 [INFO][4433] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0f66199c7ffcb07e281122b32655f9c4e0430f6a0842c715724c88e9a97d1959" HandleID="k8s-pod-network.0f66199c7ffcb07e281122b32655f9c4e0430f6a0842c715724c88e9a97d1959" Workload="172.31.21.163-k8s-test--pod--1-eth0"
Feb 13 18:53:17.864668 containerd[1948]: 2025-02-13 18:53:17.776 [INFO][4433] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0f66199c7ffcb07e281122b32655f9c4e0430f6a0842c715724c88e9a97d1959" HandleID="k8s-pod-network.0f66199c7ffcb07e281122b32655f9c4e0430f6a0842c715724c88e9a97d1959" Workload="172.31.21.163-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400040abe0), Attrs:map[string]string{"namespace":"default", "node":"172.31.21.163", "pod":"test-pod-1", "timestamp":"2025-02-13 18:53:17.754287203 +0000 UTC"}, Hostname:"172.31.21.163", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 13 18:53:17.864668 containerd[1948]: 2025-02-13 18:53:17.776 [INFO][4433] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 18:53:17.864668 containerd[1948]: 2025-02-13 18:53:17.776 [INFO][4433] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 18:53:17.864668 containerd[1948]: 2025-02-13 18:53:17.776 [INFO][4433] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172.31.21.163'
Feb 13 18:53:17.864668 containerd[1948]: 2025-02-13 18:53:17.779 [INFO][4433] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0f66199c7ffcb07e281122b32655f9c4e0430f6a0842c715724c88e9a97d1959" host="172.31.21.163"
Feb 13 18:53:17.864668 containerd[1948]: 2025-02-13 18:53:17.785 [INFO][4433] ipam/ipam.go 372: Looking up existing affinities for host host="172.31.21.163"
Feb 13 18:53:17.864668 containerd[1948]: 2025-02-13 18:53:17.792 [INFO][4433] ipam/ipam.go 489: Trying affinity for 192.168.14.192/26 host="172.31.21.163"
Feb 13 18:53:17.864668 containerd[1948]: 2025-02-13 18:53:17.795 [INFO][4433] ipam/ipam.go 155: Attempting to load block cidr=192.168.14.192/26 host="172.31.21.163"
Feb 13 18:53:17.864668 containerd[1948]: 2025-02-13 18:53:17.799 [INFO][4433] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.14.192/26 host="172.31.21.163"
Feb 13 18:53:17.864668 containerd[1948]: 2025-02-13 18:53:17.799 [INFO][4433] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.14.192/26 handle="k8s-pod-network.0f66199c7ffcb07e281122b32655f9c4e0430f6a0842c715724c88e9a97d1959" host="172.31.21.163"
Feb 13 18:53:17.864668 containerd[1948]: 2025-02-13 18:53:17.801 [INFO][4433] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0f66199c7ffcb07e281122b32655f9c4e0430f6a0842c715724c88e9a97d1959
Feb 13 18:53:17.864668 containerd[1948]: 2025-02-13 18:53:17.810 [INFO][4433] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.14.192/26 handle="k8s-pod-network.0f66199c7ffcb07e281122b32655f9c4e0430f6a0842c715724c88e9a97d1959" host="172.31.21.163"
Feb 13 18:53:17.864668 containerd[1948]: 2025-02-13 18:53:17.819 [INFO][4433] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.14.196/26] block=192.168.14.192/26 handle="k8s-pod-network.0f66199c7ffcb07e281122b32655f9c4e0430f6a0842c715724c88e9a97d1959" host="172.31.21.163"
Feb 13 18:53:17.864668 containerd[1948]: 2025-02-13 18:53:17.819 [INFO][4433] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.14.196/26] handle="k8s-pod-network.0f66199c7ffcb07e281122b32655f9c4e0430f6a0842c715724c88e9a97d1959" host="172.31.21.163"
Feb 13 18:53:17.864668 containerd[1948]: 2025-02-13 18:53:17.819 [INFO][4433] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 18:53:17.864668 containerd[1948]: 2025-02-13 18:53:17.820 [INFO][4433] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.14.196/26] IPv6=[] ContainerID="0f66199c7ffcb07e281122b32655f9c4e0430f6a0842c715724c88e9a97d1959" HandleID="k8s-pod-network.0f66199c7ffcb07e281122b32655f9c4e0430f6a0842c715724c88e9a97d1959" Workload="172.31.21.163-k8s-test--pod--1-eth0"
Feb 13 18:53:17.864668 containerd[1948]: 2025-02-13 18:53:17.823 [INFO][4423] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0f66199c7ffcb07e281122b32655f9c4e0430f6a0842c715724c88e9a97d1959" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.21.163-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.163-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"92ce9158-6391-4cec-b7ad-e2d5c26094ee", ResourceVersion:"1302", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 18, 52, 45, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.21.163", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.14.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 18:53:17.867793 containerd[1948]: 2025-02-13 18:53:17.823 [INFO][4423] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.14.196/32] ContainerID="0f66199c7ffcb07e281122b32655f9c4e0430f6a0842c715724c88e9a97d1959" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.21.163-k8s-test--pod--1-eth0"
Feb 13 18:53:17.867793 containerd[1948]: 2025-02-13 18:53:17.823 [INFO][4423] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="0f66199c7ffcb07e281122b32655f9c4e0430f6a0842c715724c88e9a97d1959" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.21.163-k8s-test--pod--1-eth0"
Feb 13 18:53:17.867793 containerd[1948]: 2025-02-13 18:53:17.834 [INFO][4423] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0f66199c7ffcb07e281122b32655f9c4e0430f6a0842c715724c88e9a97d1959" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.21.163-k8s-test--pod--1-eth0"
Feb 13 18:53:17.867793 containerd[1948]: 2025-02-13 18:53:17.836 [INFO][4423] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0f66199c7ffcb07e281122b32655f9c4e0430f6a0842c715724c88e9a97d1959" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.21.163-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172.31.21.163-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"92ce9158-6391-4cec-b7ad-e2d5c26094ee", ResourceVersion:"1302", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 18, 52, 45, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172.31.21.163", ContainerID:"0f66199c7ffcb07e281122b32655f9c4e0430f6a0842c715724c88e9a97d1959", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.14.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"f2:87:46:20:f5:46", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 18:53:17.867793 containerd[1948]: 2025-02-13 18:53:17.850 [INFO][4423] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0f66199c7ffcb07e281122b32655f9c4e0430f6a0842c715724c88e9a97d1959" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="172.31.21.163-k8s-test--pod--1-eth0"
Feb 13 18:53:17.909915 containerd[1948]: time="2025-02-13T18:53:17.909356832Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 18:53:17.909915 containerd[1948]: time="2025-02-13T18:53:17.909537708Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 18:53:17.909915 containerd[1948]: time="2025-02-13T18:53:17.909596580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:53:17.911176 containerd[1948]: time="2025-02-13T18:53:17.909801600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 18:53:17.964193 systemd[1]: Started cri-containerd-0f66199c7ffcb07e281122b32655f9c4e0430f6a0842c715724c88e9a97d1959.scope - libcontainer container 0f66199c7ffcb07e281122b32655f9c4e0430f6a0842c715724c88e9a97d1959.
Feb 13 18:53:18.038161 containerd[1948]: time="2025-02-13T18:53:18.038028860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:92ce9158-6391-4cec-b7ad-e2d5c26094ee,Namespace:default,Attempt:0,} returns sandbox id \"0f66199c7ffcb07e281122b32655f9c4e0430f6a0842c715724c88e9a97d1959\""
Feb 13 18:53:18.042797 containerd[1948]: time="2025-02-13T18:53:18.042728804Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 13 18:53:18.416809 containerd[1948]: time="2025-02-13T18:53:18.416654626Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 18:53:18.419583 containerd[1948]: time="2025-02-13T18:53:18.419165926Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Feb 13 18:53:18.427465 containerd[1948]: time="2025-02-13T18:53:18.427398646Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"69692964\" in 384.59693ms"
Feb 13 18:53:18.427865 containerd[1948]: time="2025-02-13T18:53:18.427669126Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:dfbfd726d38a926d7664f4738c165e3d91dd9fc1d33959787a30835bf39a461b\""
Feb 13 18:53:18.432382 containerd[1948]: time="2025-02-13T18:53:18.432312706Z" level=info msg="CreateContainer within sandbox \"0f66199c7ffcb07e281122b32655f9c4e0430f6a0842c715724c88e9a97d1959\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 13 18:53:18.463413 containerd[1948]: time="2025-02-13T18:53:18.463306978Z" level=info msg="CreateContainer within sandbox \"0f66199c7ffcb07e281122b32655f9c4e0430f6a0842c715724c88e9a97d1959\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"ff22e5e9ab9978c780518b26b15c6ac69d9453ab384c09ca092bbb8aa7610468\""
Feb 13 18:53:18.465031 containerd[1948]: time="2025-02-13T18:53:18.464751550Z" level=info msg="StartContainer for \"ff22e5e9ab9978c780518b26b15c6ac69d9453ab384c09ca092bbb8aa7610468\""
Feb 13 18:53:18.523246 systemd[1]: Started cri-containerd-ff22e5e9ab9978c780518b26b15c6ac69d9453ab384c09ca092bbb8aa7610468.scope - libcontainer container ff22e5e9ab9978c780518b26b15c6ac69d9453ab384c09ca092bbb8aa7610468.
Feb 13 18:53:18.595383 containerd[1948]: time="2025-02-13T18:53:18.595300355Z" level=info msg="StartContainer for \"ff22e5e9ab9978c780518b26b15c6ac69d9453ab384c09ca092bbb8aa7610468\" returns successfully"
Feb 13 18:53:18.816060 kubelet[2435]: E0213 18:53:18.815976    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:18.853139 systemd-networkd[1860]: cali5ec59c6bf6e: Gained IPv6LL
Feb 13 18:53:19.817072 kubelet[2435]: E0213 18:53:19.816986    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:20.817507 kubelet[2435]: E0213 18:53:20.817431    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:21.573397 ntpd[1921]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123
Feb 13 18:53:21.574267 ntpd[1921]: 13 Feb 18:53:21 ntpd[1921]: Listen normally on 12 cali5ec59c6bf6e [fe80::ecee:eeff:feee:eeee%9]:123
Feb 13 18:53:21.817777 kubelet[2435]: E0213 18:53:21.817683    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:22.818684 kubelet[2435]: E0213 18:53:22.818549    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:23.818952 kubelet[2435]: E0213 18:53:23.818812    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:24.819427 kubelet[2435]: E0213 18:53:24.819359    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:25.820284 kubelet[2435]: E0213 18:53:25.820210    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:26.821491 kubelet[2435]: E0213 18:53:26.821404    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:27.822546 kubelet[2435]: E0213 18:53:27.822473    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:28.726699 kubelet[2435]: E0213 18:53:28.726627    2435 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:28.822702 kubelet[2435]: E0213 18:53:28.822630    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:29.822894 kubelet[2435]: E0213 18:53:29.822802    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:30.823595 kubelet[2435]: E0213 18:53:30.823536    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:31.825210 kubelet[2435]: E0213 18:53:31.825129    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:32.825776 kubelet[2435]: E0213 18:53:32.825710    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:33.826662 kubelet[2435]: E0213 18:53:33.826597    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:34.827738 kubelet[2435]: E0213 18:53:34.827676    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:35.828160 kubelet[2435]: E0213 18:53:35.828079    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:36.829108 kubelet[2435]: E0213 18:53:36.829027    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:37.830287 kubelet[2435]: E0213 18:53:37.830191    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:38.831581 kubelet[2435]: E0213 18:53:38.831432    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:39.831741 kubelet[2435]: E0213 18:53:39.831650    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:40.831964 kubelet[2435]: E0213 18:53:40.831902    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:41.119733 kubelet[2435]: E0213 18:53:41.119091    2435 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.163?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 18:53:41.833014 kubelet[2435]: E0213 18:53:41.832931    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:42.833429 kubelet[2435]: E0213 18:53:42.833336    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:43.834508 kubelet[2435]: E0213 18:53:43.834430    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:44.835142 kubelet[2435]: E0213 18:53:44.835077    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:45.835697 kubelet[2435]: E0213 18:53:45.835589    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:46.836206 kubelet[2435]: E0213 18:53:46.836138    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:47.836336 kubelet[2435]: E0213 18:53:47.836265    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:48.726972 kubelet[2435]: E0213 18:53:48.726916    2435 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:48.836614 kubelet[2435]: E0213 18:53:48.836555    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:49.837749 kubelet[2435]: E0213 18:53:49.837684    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:50.838375 kubelet[2435]: E0213 18:53:50.838310    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:51.119948 kubelet[2435]: E0213 18:53:51.119763    2435 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.163?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 18:53:51.839337 kubelet[2435]: E0213 18:53:51.839262    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:52.839940 kubelet[2435]: E0213 18:53:52.839887    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:53.840513 kubelet[2435]: E0213 18:53:53.840443    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:54.841193 kubelet[2435]: E0213 18:53:54.841119    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:55.841781 kubelet[2435]: E0213 18:53:55.841711    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:56.842087 kubelet[2435]: E0213 18:53:56.842016    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:57.842979 kubelet[2435]: E0213 18:53:57.842887    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:58.843796 kubelet[2435]: E0213 18:53:58.843719    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:53:59.843989 kubelet[2435]: E0213 18:53:59.843919    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:54:00.844728 kubelet[2435]: E0213 18:54:00.844659    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:54:01.120517 kubelet[2435]: E0213 18:54:01.120318    2435 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.163?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 13 18:54:01.844939 kubelet[2435]: E0213 18:54:01.844876    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:54:02.845491 kubelet[2435]: E0213 18:54:02.845411    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:54:03.846031 kubelet[2435]: E0213 18:54:03.845889    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:54:04.846524 kubelet[2435]: E0213 18:54:04.846458    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:54:05.847369 kubelet[2435]: E0213 18:54:05.847255    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:54:06.176748 kubelet[2435]: E0213 18:54:06.176577    2435 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.163?timeout=10s\": unexpected EOF"
Feb 13 18:54:06.177468 kubelet[2435]: E0213 18:54:06.177381    2435 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.163?timeout=10s\": dial tcp 172.31.27.61:6443: connect: connection refused"
Feb 13 18:54:06.177468 kubelet[2435]: I0213 18:54:06.177450    2435 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Feb 13 18:54:06.179484 kubelet[2435]: E0213 18:54:06.179262    2435 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.163?timeout=10s\": dial tcp 172.31.27.61:6443: connect: connection refused" interval="200ms"
Feb 13 18:54:06.380550 kubelet[2435]: E0213 18:54:06.380461    2435 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.163?timeout=10s\": dial tcp 172.31.27.61:6443: connect: connection refused" interval="400ms"
Feb 13 18:54:06.781569 kubelet[2435]: E0213 18:54:06.781499    2435 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.163?timeout=10s\": dial tcp 172.31.27.61:6443: connect: connection refused" interval="800ms"
Feb 13 18:54:06.848307 kubelet[2435]: E0213 18:54:06.848226    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:54:07.848488 kubelet[2435]: E0213 18:54:07.848402    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:54:08.726634 kubelet[2435]: E0213 18:54:08.726557    2435 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:54:08.848941 kubelet[2435]: E0213 18:54:08.848862    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:54:09.849192 kubelet[2435]: E0213 18:54:09.849093    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:54:10.849805 kubelet[2435]: E0213 18:54:10.849696    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:54:11.850314 kubelet[2435]: E0213 18:54:11.850215    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:54:12.850864 kubelet[2435]: E0213 18:54:12.850788    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:54:13.851408 kubelet[2435]: E0213 18:54:13.851325    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:54:14.852359 kubelet[2435]: E0213 18:54:14.852279    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:54:15.853335 kubelet[2435]: E0213 18:54:15.853251    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:54:16.853511 kubelet[2435]: E0213 18:54:16.853429    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:54:17.582898 kubelet[2435]: E0213 18:54:17.582760    2435 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172.31.21.163?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="1.6s"
Feb 13 18:54:17.854371 kubelet[2435]: E0213 18:54:17.854198    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:54:18.854784 kubelet[2435]: E0213 18:54:18.854682    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:54:19.855904 kubelet[2435]: E0213 18:54:19.855796    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:54:20.856770 kubelet[2435]: E0213 18:54:20.856686    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:54:21.760884 kubelet[2435]: E0213 18:54:21.760760    2435 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"172.31.21.163\": Get \"https://172.31.27.61:6443/api/v1/nodes/172.31.21.163?resourceVersion=0&timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 13 18:54:21.857494 kubelet[2435]: E0213 18:54:21.857394    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:54:22.858259 kubelet[2435]: E0213 18:54:22.858178    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:54:23.859019 kubelet[2435]: E0213 18:54:23.858942    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:54:24.859642 kubelet[2435]: E0213 18:54:24.859562    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 18:54:25.860541 kubelet[2435]: E0213 18:54:25.860462    2435 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"