Jan 13 20:31:45.960112 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 13 20:31:45.960136 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Mon Jan 13 18:57:23 -00 2025
Jan 13 20:31:45.960146 kernel: KASLR enabled
Jan 13 20:31:45.960152 kernel: efi: EFI v2.7 by EDK II
Jan 13 20:31:45.960157 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98 
Jan 13 20:31:45.960163 kernel: random: crng init done
Jan 13 20:31:45.960170 kernel: secureboot: Secure boot disabled
Jan 13 20:31:45.960176 kernel: ACPI: Early table checksum verification disabled
Jan 13 20:31:45.960181 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Jan 13 20:31:45.960189 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS  BXPC     00000001      01000013)
Jan 13 20:31:45.960195 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 13 20:31:45.960201 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 13 20:31:45.960207 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 13 20:31:45.960213 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 13 20:31:45.960220 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 13 20:31:45.960228 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 13 20:31:45.960235 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 13 20:31:45.960241 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 13 20:31:45.960247 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 13 20:31:45.960253 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 13 20:31:45.960259 kernel: NUMA: Failed to initialise from firmware
Jan 13 20:31:45.960265 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 20:31:45.960271 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jan 13 20:31:45.960277 kernel: Zone ranges:
Jan 13 20:31:45.960283 kernel:   DMA      [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 20:31:45.960291 kernel:   DMA32    empty
Jan 13 20:31:45.960297 kernel:   Normal   empty
Jan 13 20:31:45.960303 kernel: Movable zone start for each node
Jan 13 20:31:45.960309 kernel: Early memory node ranges
Jan 13 20:31:45.960315 kernel:   node   0: [mem 0x0000000040000000-0x00000000d976ffff]
Jan 13 20:31:45.960321 kernel:   node   0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 13 20:31:45.960328 kernel:   node   0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 13 20:31:45.960334 kernel:   node   0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 13 20:31:45.960340 kernel:   node   0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 13 20:31:45.960346 kernel:   node   0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 13 20:31:45.960352 kernel:   node   0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 13 20:31:45.960359 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 13 20:31:45.960367 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 13 20:31:45.960373 kernel: psci: probing for conduit method from ACPI.
Jan 13 20:31:45.960379 kernel: psci: PSCIv1.1 detected in firmware.
Jan 13 20:31:45.960388 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 13 20:31:45.960395 kernel: psci: Trusted OS migration not required
Jan 13 20:31:45.960402 kernel: psci: SMC Calling Convention v1.1
Jan 13 20:31:45.960409 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 13 20:31:45.960416 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 13 20:31:45.960423 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 13 20:31:45.960430 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 
Jan 13 20:31:45.960437 kernel: Detected PIPT I-cache on CPU0
Jan 13 20:31:45.960443 kernel: CPU features: detected: GIC system register CPU interface
Jan 13 20:31:45.960450 kernel: CPU features: detected: Hardware dirty bit management
Jan 13 20:31:45.960456 kernel: CPU features: detected: Spectre-v4
Jan 13 20:31:45.960463 kernel: CPU features: detected: Spectre-BHB
Jan 13 20:31:45.960470 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 13 20:31:45.960478 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 13 20:31:45.960485 kernel: CPU features: detected: ARM erratum 1418040
Jan 13 20:31:45.960491 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 13 20:31:45.960498 kernel: alternatives: applying boot alternatives
Jan 13 20:31:45.960506 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6ba5f90349644346e4f5fa9305ab5a05339928ee9f4f137665e797727c1fc436
Jan 13 20:31:45.960513 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 13 20:31:45.960519 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 13 20:31:45.960527 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 13 20:31:45.960533 kernel: Fallback order for Node 0: 0 
Jan 13 20:31:45.960540 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 633024
Jan 13 20:31:45.960546 kernel: Policy zone: DMA
Jan 13 20:31:45.960555 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 13 20:31:45.960561 kernel: software IO TLB: area num 4.
Jan 13 20:31:45.960568 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 13 20:31:45.960575 kernel: Memory: 2386324K/2572288K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39680K init, 897K bss, 185964K reserved, 0K cma-reserved)
Jan 13 20:31:45.960581 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 13 20:31:45.960588 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 13 20:31:45.960595 kernel: rcu:         RCU event tracing is enabled.
Jan 13 20:31:45.960602 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 13 20:31:45.960609 kernel:         Trampoline variant of Tasks RCU enabled.
Jan 13 20:31:45.960616 kernel:         Tracing variant of Tasks RCU enabled.
Jan 13 20:31:45.960622 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 13 20:31:45.960629 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 13 20:31:45.960637 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 13 20:31:45.960643 kernel: GICv3: 256 SPIs implemented
Jan 13 20:31:45.960650 kernel: GICv3: 0 Extended SPIs implemented
Jan 13 20:31:45.960656 kernel: Root IRQ handler: gic_handle_irq
Jan 13 20:31:45.960663 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 13 20:31:45.960669 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 13 20:31:45.960676 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 13 20:31:45.960682 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 13 20:31:45.960689 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 13 20:31:45.960696 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 13 20:31:45.960702 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 13 20:31:45.960710 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 13 20:31:45.960717 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:31:45.960723 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 13 20:31:45.960730 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 13 20:31:45.960737 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 13 20:31:45.960744 kernel: arm-pv: using stolen time PV
Jan 13 20:31:45.960751 kernel: Console: colour dummy device 80x25
Jan 13 20:31:45.960757 kernel: ACPI: Core revision 20230628
Jan 13 20:31:45.960764 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 13 20:31:45.960771 kernel: pid_max: default: 32768 minimum: 301
Jan 13 20:31:45.960779 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 13 20:31:45.960786 kernel: landlock: Up and running.
Jan 13 20:31:45.960793 kernel: SELinux:  Initializing.
Jan 13 20:31:45.960800 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:31:45.960807 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 13 20:31:45.960814 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:31:45.960821 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 13 20:31:45.960828 kernel: rcu: Hierarchical SRCU implementation.
Jan 13 20:31:45.960835 kernel: rcu:         Max phase no-delay instances is 400.
Jan 13 20:31:45.960841 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 13 20:31:45.960849 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 13 20:31:45.960856 kernel: Remapping and enabling EFI services.
Jan 13 20:31:45.960863 kernel: smp: Bringing up secondary CPUs ...
Jan 13 20:31:45.960870 kernel: Detected PIPT I-cache on CPU1
Jan 13 20:31:45.960876 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 13 20:31:45.960883 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 13 20:31:45.960891 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:31:45.960897 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 13 20:31:45.960911 kernel: Detected PIPT I-cache on CPU2
Jan 13 20:31:45.960929 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 13 20:31:45.960938 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 13 20:31:45.960950 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:31:45.960959 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 13 20:31:45.960966 kernel: Detected PIPT I-cache on CPU3
Jan 13 20:31:45.960973 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 13 20:31:45.960980 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 13 20:31:45.960987 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 13 20:31:45.960994 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 13 20:31:45.961003 kernel: smp: Brought up 1 node, 4 CPUs
Jan 13 20:31:45.961010 kernel: SMP: Total of 4 processors activated.
Jan 13 20:31:45.961018 kernel: CPU features: detected: 32-bit EL0 Support
Jan 13 20:31:45.961025 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 13 20:31:45.961032 kernel: CPU features: detected: Common not Private translations
Jan 13 20:31:45.961040 kernel: CPU features: detected: CRC32 instructions
Jan 13 20:31:45.961047 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 13 20:31:45.961054 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 13 20:31:45.961063 kernel: CPU features: detected: LSE atomic instructions
Jan 13 20:31:45.961070 kernel: CPU features: detected: Privileged Access Never
Jan 13 20:31:45.961077 kernel: CPU features: detected: RAS Extension Support
Jan 13 20:31:45.961084 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 13 20:31:45.961092 kernel: CPU: All CPU(s) started at EL1
Jan 13 20:31:45.961099 kernel: alternatives: applying system-wide alternatives
Jan 13 20:31:45.961106 kernel: devtmpfs: initialized
Jan 13 20:31:45.961113 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 13 20:31:45.961121 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 13 20:31:45.961129 kernel: pinctrl core: initialized pinctrl subsystem
Jan 13 20:31:45.961136 kernel: SMBIOS 3.0.0 present.
Jan 13 20:31:45.961143 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jan 13 20:31:45.961151 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 13 20:31:45.961158 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 13 20:31:45.961165 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 13 20:31:45.961173 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 13 20:31:45.961180 kernel: audit: initializing netlink subsys (disabled)
Jan 13 20:31:45.961188 kernel: audit: type=2000 audit(0.033:1): state=initialized audit_enabled=0 res=1
Jan 13 20:31:45.961196 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 13 20:31:45.961203 kernel: cpuidle: using governor menu
Jan 13 20:31:45.961211 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 13 20:31:45.961218 kernel: ASID allocator initialised with 32768 entries
Jan 13 20:31:45.961225 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 13 20:31:45.961232 kernel: Serial: AMBA PL011 UART driver
Jan 13 20:31:45.961239 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 13 20:31:45.961247 kernel: Modules: 0 pages in range for non-PLT usage
Jan 13 20:31:45.961254 kernel: Modules: 508960 pages in range for PLT usage
Jan 13 20:31:45.961263 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 13 20:31:45.961270 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 13 20:31:45.961277 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 13 20:31:45.961285 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 13 20:31:45.961292 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 13 20:31:45.961299 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 13 20:31:45.961306 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 13 20:31:45.961313 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 13 20:31:45.961320 kernel: ACPI: Added _OSI(Module Device)
Jan 13 20:31:45.961329 kernel: ACPI: Added _OSI(Processor Device)
Jan 13 20:31:45.961336 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 13 20:31:45.961343 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 13 20:31:45.961350 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 13 20:31:45.961358 kernel: ACPI: Interpreter enabled
Jan 13 20:31:45.961365 kernel: ACPI: Using GIC for interrupt routing
Jan 13 20:31:45.961372 kernel: ACPI: MCFG table detected, 1 entries
Jan 13 20:31:45.961379 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 13 20:31:45.961387 kernel: printk: console [ttyAMA0] enabled
Jan 13 20:31:45.961396 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 13 20:31:45.961546 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 13 20:31:45.961620 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 13 20:31:45.961685 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 13 20:31:45.961748 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 13 20:31:45.961810 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 13 20:31:45.961820 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io  0x0000-0xffff window]
Jan 13 20:31:45.961829 kernel: PCI host bridge to bus 0000:00
Jan 13 20:31:45.961899 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 13 20:31:45.962000 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0xffff window]
Jan 13 20:31:45.962058 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 13 20:31:45.962115 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 13 20:31:45.962194 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 13 20:31:45.962270 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 13 20:31:45.962341 kernel: pci 0000:00:01.0: reg 0x10: [io  0x0000-0x001f]
Jan 13 20:31:45.962409 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 13 20:31:45.962474 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 20:31:45.962538 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 13 20:31:45.962602 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 13 20:31:45.962665 kernel: pci 0000:00:01.0: BAR 0: assigned [io  0x1000-0x101f]
Jan 13 20:31:45.962723 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 13 20:31:45.962783 kernel: pci_bus 0000:00: resource 5 [io  0x0000-0xffff window]
Jan 13 20:31:45.962843 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 13 20:31:45.962853 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 13 20:31:45.962861 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 13 20:31:45.962868 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 13 20:31:45.962876 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 13 20:31:45.962883 kernel: iommu: Default domain type: Translated
Jan 13 20:31:45.962891 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 13 20:31:45.962900 kernel: efivars: Registered efivars operations
Jan 13 20:31:45.962918 kernel: vgaarb: loaded
Jan 13 20:31:45.962944 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 13 20:31:45.962952 kernel: VFS: Disk quotas dquot_6.6.0
Jan 13 20:31:45.962960 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 13 20:31:45.962968 kernel: pnp: PnP ACPI init
Jan 13 20:31:45.963067 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 13 20:31:45.963080 kernel: pnp: PnP ACPI: found 1 devices
Jan 13 20:31:45.963091 kernel: NET: Registered PF_INET protocol family
Jan 13 20:31:45.963099 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 13 20:31:45.963106 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 13 20:31:45.963114 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 13 20:31:45.963121 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 13 20:31:45.963129 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 13 20:31:45.963136 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 13 20:31:45.963143 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:31:45.963151 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 13 20:31:45.963160 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 13 20:31:45.963167 kernel: PCI: CLS 0 bytes, default 64
Jan 13 20:31:45.963175 kernel: kvm [1]: HYP mode not available
Jan 13 20:31:45.963182 kernel: Initialise system trusted keyrings
Jan 13 20:31:45.963189 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 13 20:31:45.963197 kernel: Key type asymmetric registered
Jan 13 20:31:45.963204 kernel: Asymmetric key parser 'x509' registered
Jan 13 20:31:45.963211 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 13 20:31:45.963219 kernel: io scheduler mq-deadline registered
Jan 13 20:31:45.963228 kernel: io scheduler kyber registered
Jan 13 20:31:45.963235 kernel: io scheduler bfq registered
Jan 13 20:31:45.963243 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 13 20:31:45.963250 kernel: ACPI: button: Power Button [PWRB]
Jan 13 20:31:45.963259 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 13 20:31:45.963332 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 13 20:31:45.963343 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 13 20:31:45.963351 kernel: thunder_xcv, ver 1.0
Jan 13 20:31:45.963358 kernel: thunder_bgx, ver 1.0
Jan 13 20:31:45.963368 kernel: nicpf, ver 1.0
Jan 13 20:31:45.963375 kernel: nicvf, ver 1.0
Jan 13 20:31:45.963451 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 13 20:31:45.963516 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-13T20:31:45 UTC (1736800305)
Jan 13 20:31:45.963525 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 13 20:31:45.963533 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 13 20:31:45.963541 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 13 20:31:45.963548 kernel: watchdog: Hard watchdog permanently disabled
Jan 13 20:31:45.963557 kernel: NET: Registered PF_INET6 protocol family
Jan 13 20:31:45.963565 kernel: Segment Routing with IPv6
Jan 13 20:31:45.963572 kernel: In-situ OAM (IOAM) with IPv6
Jan 13 20:31:45.963580 kernel: NET: Registered PF_PACKET protocol family
Jan 13 20:31:45.963587 kernel: Key type dns_resolver registered
Jan 13 20:31:45.963594 kernel: registered taskstats version 1
Jan 13 20:31:45.963601 kernel: Loading compiled-in X.509 certificates
Jan 13 20:31:45.963609 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: a9edf9d44b1b82dedf7830d1843430df7c4d16cb'
Jan 13 20:31:45.963616 kernel: Key type .fscrypt registered
Jan 13 20:31:45.963625 kernel: Key type fscrypt-provisioning registered
Jan 13 20:31:45.963633 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 13 20:31:45.963640 kernel: ima: Allocated hash algorithm: sha1
Jan 13 20:31:45.963647 kernel: ima: No architecture policies found
Jan 13 20:31:45.963655 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 13 20:31:45.963662 kernel: clk: Disabling unused clocks
Jan 13 20:31:45.963670 kernel: Freeing unused kernel memory: 39680K
Jan 13 20:31:45.963677 kernel: Run /init as init process
Jan 13 20:31:45.963684 kernel:   with arguments:
Jan 13 20:31:45.963693 kernel:     /init
Jan 13 20:31:45.963700 kernel:   with environment:
Jan 13 20:31:45.963707 kernel:     HOME=/
Jan 13 20:31:45.963714 kernel:     TERM=linux
Jan 13 20:31:45.963722 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 13 20:31:45.963731 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:31:45.963741 systemd[1]: Detected virtualization kvm.
Jan 13 20:31:45.963749 systemd[1]: Detected architecture arm64.
Jan 13 20:31:45.963758 systemd[1]: Running in initrd.
Jan 13 20:31:45.963766 systemd[1]: No hostname configured, using default hostname.
Jan 13 20:31:45.963774 systemd[1]: Hostname set to <localhost>.
Jan 13 20:31:45.963783 systemd[1]: Initializing machine ID from VM UUID.
Jan 13 20:31:45.963790 systemd[1]: Queued start job for default target initrd.target.
Jan 13 20:31:45.963798 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:31:45.963806 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:31:45.963815 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 13 20:31:45.963825 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:31:45.963833 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 13 20:31:45.963841 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 13 20:31:45.963851 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 13 20:31:45.963859 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 13 20:31:45.963867 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:31:45.963875 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:31:45.963885 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:31:45.963893 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:31:45.963901 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:31:45.963919 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:31:45.963938 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:31:45.963946 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:31:45.963955 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 13 20:31:45.963963 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 13 20:31:45.963974 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:31:45.963982 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:31:45.963990 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:31:45.963998 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:31:45.964006 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 13 20:31:45.964014 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:31:45.964022 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 13 20:31:45.964030 systemd[1]: Starting systemd-fsck-usr.service...
Jan 13 20:31:45.964038 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:31:45.964048 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:31:45.964056 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:31:45.964064 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 13 20:31:45.964072 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:31:45.964080 systemd[1]: Finished systemd-fsck-usr.service.
Jan 13 20:31:45.964089 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 13 20:31:45.964123 systemd-journald[238]: Collecting audit messages is disabled.
Jan 13 20:31:45.964144 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:31:45.964155 systemd-journald[238]: Journal started
Jan 13 20:31:45.964173 systemd-journald[238]: Runtime Journal (/run/log/journal/4057baae8e8a459c91329daef10d2635) is 5.9M, max 47.3M, 41.4M free.
Jan 13 20:31:45.955985 systemd-modules-load[239]: Inserted module 'overlay'
Jan 13 20:31:45.969475 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:31:45.971590 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:31:45.971610 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 13 20:31:45.974971 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 13 20:31:45.976939 kernel: Bridge firewalling registered
Jan 13 20:31:45.975434 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jan 13 20:31:45.978385 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:31:45.995252 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:31:45.999175 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:31:46.002787 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:31:46.004403 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:31:46.009899 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:31:46.030197 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 13 20:31:46.032212 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:31:46.034596 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:31:46.037967 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:31:46.042017 dracut-cmdline[276]: dracut-dracut-053
Jan 13 20:31:46.044005 dracut-cmdline[276]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=6ba5f90349644346e4f5fa9305ab5a05339928ee9f4f137665e797727c1fc436
Jan 13 20:31:46.068915 systemd-resolved[284]: Positive Trust Anchors:
Jan 13 20:31:46.069007 systemd-resolved[284]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:31:46.069039 systemd-resolved[284]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:31:46.073807 systemd-resolved[284]: Defaulting to hostname 'linux'.
Jan 13 20:31:46.074829 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:31:46.079366 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:31:46.125963 kernel: SCSI subsystem initialized
Jan 13 20:31:46.131077 kernel: Loading iSCSI transport class v2.0-870.
Jan 13 20:31:46.139968 kernel: iscsi: registered transport (tcp)
Jan 13 20:31:46.153951 kernel: iscsi: registered transport (qla4xxx)
Jan 13 20:31:46.153994 kernel: QLogic iSCSI HBA Driver
Jan 13 20:31:46.204913 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:31:46.221126 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 13 20:31:46.239301 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 13 20:31:46.239344 kernel: device-mapper: uevent: version 1.0.3
Jan 13 20:31:46.240470 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 13 20:31:46.289951 kernel: raid6: neonx8   gen() 15732 MB/s
Jan 13 20:31:46.304952 kernel: raid6: neonx4   gen() 15572 MB/s
Jan 13 20:31:46.321944 kernel: raid6: neonx2   gen() 13182 MB/s
Jan 13 20:31:46.338942 kernel: raid6: neonx1   gen() 10479 MB/s
Jan 13 20:31:46.355942 kernel: raid6: int64x8  gen()  6960 MB/s
Jan 13 20:31:46.372936 kernel: raid6: int64x4  gen()  7343 MB/s
Jan 13 20:31:46.389945 kernel: raid6: int64x2  gen()  6109 MB/s
Jan 13 20:31:46.407129 kernel: raid6: int64x1  gen()  5017 MB/s
Jan 13 20:31:46.407145 kernel: raid6: using algorithm neonx8 gen() 15732 MB/s
Jan 13 20:31:46.425072 kernel: raid6: .... xor() 11930 MB/s, rmw enabled
Jan 13 20:31:46.425091 kernel: raid6: using neon recovery algorithm
Jan 13 20:31:46.429942 kernel: xor: measuring software checksum speed
Jan 13 20:31:46.431218 kernel:    8regs           : 16863 MB/sec
Jan 13 20:31:46.431231 kernel:    32regs          : 19631 MB/sec
Jan 13 20:31:46.434261 kernel:    arm64_neon      :  1721 MB/sec
Jan 13 20:31:46.434275 kernel: xor: using function: 32regs (19631 MB/sec)
Jan 13 20:31:46.483946 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 13 20:31:46.494500 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:31:46.508110 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:31:46.520231 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Jan 13 20:31:46.523394 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:31:46.526744 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 13 20:31:46.541043 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation
Jan 13 20:31:46.568558 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:31:46.581070 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:31:46.621108 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:31:46.630118 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 13 20:31:46.643340 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:31:46.644630 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:31:46.646793 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:31:46.649796 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:31:46.657269 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 13 20:31:46.663939 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 13 20:31:46.676290 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 13 20:31:46.676392 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 13 20:31:46.676403 kernel: GPT:9289727 != 19775487
Jan 13 20:31:46.676412 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 13 20:31:46.676421 kernel: GPT:9289727 != 19775487
Jan 13 20:31:46.676430 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 13 20:31:46.676445 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:31:46.671132 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:31:46.677189 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:31:46.677308 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:31:46.679962 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:31:46.682911 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:31:46.683038 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:31:46.685752 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:31:46.697949 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (507)
Jan 13 20:31:46.699948 kernel: BTRFS: device fsid 8e09fced-e016-4c4f-bac5-4013d13dfd78 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (509)
Jan 13 20:31:46.703144 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:31:46.713595 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:31:46.718494 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 13 20:31:46.722997 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 13 20:31:46.730134 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 20:31:46.733974 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 13 20:31:46.735176 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 13 20:31:46.751078 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 13 20:31:46.756099 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 13 20:31:46.758584 disk-uuid[551]: Primary Header is updated.
Jan 13 20:31:46.758584 disk-uuid[551]: Secondary Entries is updated.
Jan 13 20:31:46.758584 disk-uuid[551]: Secondary Header is updated.
Jan 13 20:31:46.763481 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:31:46.765946 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:31:46.778096 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:31:47.769289 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 13 20:31:47.769355 disk-uuid[552]: The operation has completed successfully.
Jan 13 20:31:47.791757 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 13 20:31:47.791864 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 13 20:31:47.811124 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 13 20:31:47.814010 sh[573]: Success
Jan 13 20:31:47.829957 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 13 20:31:47.869381 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 13 20:31:47.871357 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 13 20:31:47.872501 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 13 20:31:47.885029 kernel: BTRFS info (device dm-0): first mount of filesystem 8e09fced-e016-4c4f-bac5-4013d13dfd78
Jan 13 20:31:47.885063 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:31:47.886223 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 13 20:31:47.886238 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 13 20:31:47.887932 kernel: BTRFS info (device dm-0): using free space tree
Jan 13 20:31:47.891381 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 13 20:31:47.892870 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 13 20:31:47.902078 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 13 20:31:47.903791 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 13 20:31:47.913715 kernel: BTRFS info (device vda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:31:47.913754 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:31:47.913770 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:31:47.919051 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:31:47.926011 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 13 20:31:47.927994 kernel: BTRFS info (device vda6): last unmount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:31:47.933870 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 13 20:31:47.942132 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 13 20:31:48.019150 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:31:48.028099 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:31:48.055634 ignition[666]: Ignition 2.20.0
Jan 13 20:31:48.055644 ignition[666]: Stage: fetch-offline
Jan 13 20:31:48.055679 ignition[666]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:31:48.055697 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:31:48.060242 systemd-networkd[763]: lo: Link UP
Jan 13 20:31:48.055838 ignition[666]: parsed url from cmdline: ""
Jan 13 20:31:48.060245 systemd-networkd[763]: lo: Gained carrier
Jan 13 20:31:48.055841 ignition[666]: no config URL provided
Jan 13 20:31:48.061081 systemd-networkd[763]: Enumeration completed
Jan 13 20:31:48.055846 ignition[666]: reading system config file "/usr/lib/ignition/user.ign"
Jan 13 20:31:48.061465 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:31:48.055853 ignition[666]: no config at "/usr/lib/ignition/user.ign"
Jan 13 20:31:48.061467 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:31:48.055878 ignition[666]: op(1): [started]  loading QEMU firmware config module
Jan 13 20:31:48.062157 systemd-networkd[763]: eth0: Link UP
Jan 13 20:31:48.055882 ignition[666]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 13 20:31:48.062160 systemd-networkd[763]: eth0: Gained carrier
Jan 13 20:31:48.060634 ignition[666]: op(1): [finished] loading QEMU firmware config module
Jan 13 20:31:48.062167 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:31:48.060655 ignition[666]: QEMU firmware config was not found. Ignoring...
Jan 13 20:31:48.063145 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:31:48.064888 systemd[1]: Reached target network.target - Network.
Jan 13 20:31:48.077968 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.151/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 20:31:48.115505 ignition[666]: parsing config with SHA512: d9f18c4a3549b66dd90be3d37675333b3a9d8898c9c9d99c3ef3cec1f07bbb68a17d5acef4903ddd4bcd73737e8a51fe9dadb89ef81f887423a1da211276701a
Jan 13 20:31:48.120836 unknown[666]: fetched base config from "system"
Jan 13 20:31:48.120853 unknown[666]: fetched user config from "qemu"
Jan 13 20:31:48.121494 ignition[666]: fetch-offline: fetch-offline passed
Jan 13 20:31:48.123554 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:31:48.121717 ignition[666]: Ignition finished successfully
Jan 13 20:31:48.125045 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 13 20:31:48.132064 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 13 20:31:48.142671 ignition[770]: Ignition 2.20.0
Jan 13 20:31:48.142681 ignition[770]: Stage: kargs
Jan 13 20:31:48.142837 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:31:48.142846 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:31:48.143730 ignition[770]: kargs: kargs passed
Jan 13 20:31:48.147513 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 13 20:31:48.143770 ignition[770]: Ignition finished successfully
Jan 13 20:31:48.156134 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 13 20:31:48.166395 ignition[779]: Ignition 2.20.0
Jan 13 20:31:48.167259 ignition[779]: Stage: disks
Jan 13 20:31:48.167434 ignition[779]: no configs at "/usr/lib/ignition/base.d"
Jan 13 20:31:48.169674 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 13 20:31:48.167445 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:31:48.171135 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 13 20:31:48.168344 ignition[779]: disks: disks passed
Jan 13 20:31:48.172840 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 13 20:31:48.168393 ignition[779]: Ignition finished successfully
Jan 13 20:31:48.174909 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:31:48.176811 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:31:48.178384 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:31:48.196136 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 13 20:31:48.208502 systemd-fsck[789]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 13 20:31:48.213383 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 13 20:31:48.215756 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 13 20:31:48.268946 kernel: EXT4-fs (vda9): mounted filesystem 8fd847fb-a6be-44f6-9adf-0a0a79b9fa94 r/w with ordered data mode. Quota mode: none.
Jan 13 20:31:48.269721 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 13 20:31:48.271154 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:31:48.284002 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:31:48.285668 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 13 20:31:48.287112 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 13 20:31:48.287151 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 13 20:31:48.297026 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (797)
Jan 13 20:31:48.297049 kernel: BTRFS info (device vda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:31:48.297060 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:31:48.297069 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:31:48.287171 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:31:48.301012 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:31:48.291704 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 13 20:31:48.293536 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 13 20:31:48.302496 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:31:48.355934 initrd-setup-root[821]: cut: /sysroot/etc/passwd: No such file or directory
Jan 13 20:31:48.371512 initrd-setup-root[828]: cut: /sysroot/etc/group: No such file or directory
Jan 13 20:31:48.375687 initrd-setup-root[835]: cut: /sysroot/etc/shadow: No such file or directory
Jan 13 20:31:48.379880 initrd-setup-root[842]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 13 20:31:48.458549 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 13 20:31:48.466060 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 13 20:31:48.468457 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 13 20:31:48.473952 kernel: BTRFS info (device vda6): last unmount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:31:48.488546 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 13 20:31:48.491847 ignition[911]: INFO     : Ignition 2.20.0
Jan 13 20:31:48.491847 ignition[911]: INFO     : Stage: mount
Jan 13 20:31:48.494082 ignition[911]: INFO     : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:31:48.494082 ignition[911]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:31:48.494082 ignition[911]: INFO     : mount: mount passed
Jan 13 20:31:48.494082 ignition[911]: INFO     : Ignition finished successfully
Jan 13 20:31:48.494605 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 13 20:31:48.506061 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 13 20:31:48.883749 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 13 20:31:48.896119 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 13 20:31:48.902861 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (924)
Jan 13 20:31:48.902892 kernel: BTRFS info (device vda6): first mount of filesystem cd0b9c1b-856d-4823-9d4d-1660845d57c6
Jan 13 20:31:48.902908 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 13 20:31:48.903807 kernel: BTRFS info (device vda6): using free space tree
Jan 13 20:31:48.906960 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 13 20:31:48.907707 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 13 20:31:48.923495 ignition[941]: INFO     : Ignition 2.20.0
Jan 13 20:31:48.923495 ignition[941]: INFO     : Stage: files
Jan 13 20:31:48.925181 ignition[941]: INFO     : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:31:48.925181 ignition[941]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:31:48.925181 ignition[941]: DEBUG    : files: compiled without relabeling support, skipping
Jan 13 20:31:48.928633 ignition[941]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Jan 13 20:31:48.928633 ignition[941]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 13 20:31:48.928633 ignition[941]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 13 20:31:48.928633 ignition[941]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Jan 13 20:31:48.928633 ignition[941]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 13 20:31:48.927872 unknown[941]: wrote ssh authorized keys file for user: core
Jan 13 20:31:48.936150 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 20:31:48.936150 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jan 13 20:31:48.973732 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 13 20:31:49.284410 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jan 13 20:31:49.284410 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/home/core/install.sh"
Jan 13 20:31:49.288216 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 13 20:31:49.288216 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:31:49.288216 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 13 20:31:49.288216 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:31:49.288216 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 13 20:31:49.288216 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [started]  writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:31:49.288216 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 13 20:31:49.288216 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:31:49.288216 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 13 20:31:49.288216 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 13 20:31:49.288216 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 13 20:31:49.288216 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 13 20:31:49.288216 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
Jan 13 20:31:49.604497 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 13 20:31:49.867711 ignition[941]: INFO     : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
Jan 13 20:31:49.867711 ignition[941]: INFO     : files: op(b): [started]  processing unit "prepare-helm.service"
Jan 13 20:31:49.871309 ignition[941]: INFO     : files: op(b): op(c): [started]  writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:31:49.871309 ignition[941]: INFO     : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 13 20:31:49.871309 ignition[941]: INFO     : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 13 20:31:49.871309 ignition[941]: INFO     : files: op(d): [started]  processing unit "coreos-metadata.service"
Jan 13 20:31:49.871309 ignition[941]: INFO     : files: op(d): op(e): [started]  writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 20:31:49.871309 ignition[941]: INFO     : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 13 20:31:49.871309 ignition[941]: INFO     : files: op(d): [finished] processing unit "coreos-metadata.service"
Jan 13 20:31:49.871309 ignition[941]: INFO     : files: op(f): [started]  setting preset to disabled for "coreos-metadata.service"
Jan 13 20:31:49.893079 ignition[941]: INFO     : files: op(f): op(10): [started]  removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 20:31:49.897020 ignition[941]: INFO     : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 13 20:31:49.898681 ignition[941]: INFO     : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 13 20:31:49.898681 ignition[941]: INFO     : files: op(11): [started]  setting preset to enabled for "prepare-helm.service"
Jan 13 20:31:49.898681 ignition[941]: INFO     : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jan 13 20:31:49.898681 ignition[941]: INFO     : files: createResultFile: createFiles: op(12): [started]  writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:31:49.898681 ignition[941]: INFO     : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 13 20:31:49.898681 ignition[941]: INFO     : files: files passed
Jan 13 20:31:49.898681 ignition[941]: INFO     : Ignition finished successfully
Jan 13 20:31:49.900118 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 13 20:31:49.913089 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 13 20:31:49.915058 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 13 20:31:49.916660 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 13 20:31:49.916766 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 13 20:31:49.924704 initrd-setup-root-after-ignition[970]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 13 20:31:49.927030 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:31:49.927030 initrd-setup-root-after-ignition[972]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:31:49.930383 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 13 20:31:49.930761 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:31:49.933621 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 13 20:31:49.939151 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 13 20:31:49.960234 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 13 20:31:49.960344 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 13 20:31:49.962636 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 13 20:31:49.963664 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 13 20:31:49.965688 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 13 20:31:49.966498 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 13 20:31:49.982968 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:31:49.985498 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 13 20:31:49.996504 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 13 20:31:49.997749 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:31:49.999820 systemd[1]: Stopped target timers.target - Timer Units.
Jan 13 20:31:50.001641 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 13 20:31:50.001771 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 13 20:31:50.004294 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 13 20:31:50.006360 systemd[1]: Stopped target basic.target - Basic System.
Jan 13 20:31:50.008040 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 13 20:31:50.009826 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 13 20:31:50.011850 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 13 20:31:50.013861 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 13 20:31:50.015741 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 13 20:31:50.017693 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 13 20:31:50.019641 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 13 20:31:50.021425 systemd[1]: Stopped target swap.target - Swaps.
Jan 13 20:31:50.022964 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 13 20:31:50.023105 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 13 20:31:50.025471 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:31:50.027431 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:31:50.033265 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 13 20:31:50.038490 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:31:50.039813 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 13 20:31:50.039969 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 13 20:31:50.049473 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 13 20:31:50.049608 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 13 20:31:50.051597 systemd[1]: Stopped target paths.target - Path Units.
Jan 13 20:31:50.053152 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 13 20:31:50.058010 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:31:50.059318 systemd[1]: Stopped target slices.target - Slice Units.
Jan 13 20:31:50.061490 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 13 20:31:50.063053 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 13 20:31:50.063145 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 13 20:31:50.064672 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 13 20:31:50.064754 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 13 20:31:50.066283 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 13 20:31:50.066395 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 13 20:31:50.068148 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 13 20:31:50.068248 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 13 20:31:50.073187 systemd-networkd[763]: eth0: Gained IPv6LL
Jan 13 20:31:50.078171 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 13 20:31:50.079691 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 13 20:31:50.079826 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:31:50.082450 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 13 20:31:50.083295 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 13 20:31:50.083418 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:31:50.089108 ignition[997]: INFO     : Ignition 2.20.0
Jan 13 20:31:50.089108 ignition[997]: INFO     : Stage: umount
Jan 13 20:31:50.089108 ignition[997]: INFO     : no configs at "/usr/lib/ignition/base.d"
Jan 13 20:31:50.089108 ignition[997]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 13 20:31:50.089108 ignition[997]: INFO     : umount: umount passed
Jan 13 20:31:50.089108 ignition[997]: INFO     : Ignition finished successfully
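
[Editor's sketch] Ignition runs once per stage inside the initramfs, and umount is the final stage; the per-stage results land both in the journal and in the result file written at op(12) earlier. Assuming the journal was made persistent, the stage sequence of this boot can be replayed later with:

    # list the Ignition stages this boot went through
    journalctl -t ignition -o cat | grep 'Stage:'
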
Jan 13 20:31:50.085509 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 13 20:31:50.085602 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 13 20:31:50.091524 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 13 20:31:50.093247 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 13 20:31:50.095113 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 13 20:31:50.095200 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 13 20:31:50.098098 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 13 20:31:50.098514 systemd[1]: Stopped target network.target - Network.
Jan 13 20:31:50.100095 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 13 20:31:50.100168 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 13 20:31:50.102063 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 13 20:31:50.102115 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 13 20:31:50.103978 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 13 20:31:50.104026 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 13 20:31:50.105820 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 13 20:31:50.105867 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 13 20:31:50.107869 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 13 20:31:50.109533 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 13 20:31:50.115790 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 13 20:31:50.115937 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 13 20:31:50.118293 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 13 20:31:50.118347 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 13 20:31:50.120065 systemd-networkd[763]: eth0: DHCPv6 lease lost
Jan 13 20:31:50.121830 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 13 20:31:50.122006 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 13 20:31:50.125420 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 13 20:31:50.125455 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:31:50.137039 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 13 20:31:50.137917 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 13 20:31:50.138001 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 13 20:31:50.140071 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 13 20:31:50.140117 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:31:50.142140 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 13 20:31:50.142186 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:31:50.144377 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:31:50.154604 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 13 20:31:50.154766 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:31:50.157130 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 13 20:31:50.157208 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 13 20:31:50.158308 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 13 20:31:50.158379 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 13 20:31:50.160861 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 13 20:31:50.160958 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:31:50.162022 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 13 20:31:50.162059 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:31:50.163754 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 13 20:31:50.163800 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 13 20:31:50.166455 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 13 20:31:50.166502 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 13 20:31:50.169290 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 13 20:31:50.169339 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 13 20:31:50.172233 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 13 20:31:50.172283 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 13 20:31:50.190162 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 13 20:31:50.191311 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 13 20:31:50.191385 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:31:50.193659 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 13 20:31:50.193709 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:31:50.196052 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 13 20:31:50.196158 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 13 20:31:50.199494 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 13 20:31:50.201737 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 13 20:31:50.210932 systemd[1]: Switching root.
Jan 13 20:31:50.239126 systemd-journald[238]: Journal stopped
Jan 13 20:31:50.945865 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
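
[Editor's sketch] "Switching root" is initrd-switch-root.service handing PID 1 over to the prepared /sysroot; journald is stopped (hence the SIGTERM from PID 1) and restarted inside the new root, which is why the following lines re-report kernel state. In essence, the unit performs:

    # what initrd-switch-root.service boils down to (illustrative)
    systemctl switch-root /sysroot
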
Jan 13 20:31:50.945955 kernel: SELinux:  policy capability network_peer_controls=1
Jan 13 20:31:50.945970 kernel: SELinux:  policy capability open_perms=1
Jan 13 20:31:50.945983 kernel: SELinux:  policy capability extended_socket_class=1
Jan 13 20:31:50.945997 kernel: SELinux:  policy capability always_check_network=0
Jan 13 20:31:50.946007 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 13 20:31:50.946016 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 13 20:31:50.946025 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Jan 13 20:31:50.946034 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Jan 13 20:31:50.946044 kernel: audit: type=1403 audit(1736800310.375:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 13 20:31:50.946054 systemd[1]: Successfully loaded SELinux policy in 31.596ms.
Jan 13 20:31:50.946075 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.569ms.
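
[Editor's sketch] On its first transaction in the real root, systemd loads the SELinux policy (the audit type=1403 record above) and relabels the early mounts. Once a shell is available, the resulting state can be checked with standard tooling (commands illustrative):

    getenforce          # current enforcing mode
    ls -Z /run | head   # labels applied by the /run relabel
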
Jan 13 20:31:50.946088 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 13 20:31:50.946099 systemd[1]: Detected virtualization kvm.
Jan 13 20:31:50.946110 systemd[1]: Detected architecture arm64.
Jan 13 20:31:50.946120 systemd[1]: Detected first boot.
Jan 13 20:31:50.946130 systemd[1]: Initializing machine ID from VM UUID.
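
[Editor's sketch] "Initializing machine ID from VM UUID" means systemd seeded /etc/machine-id from the SMBIOS product UUID exposed by QEMU/EDK II on this guest. The two values can be compared by hand (reading product_uuid usually requires root):

    cat /sys/class/dmi/id/product_uuid
    cat /etc/machine-id
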
Jan 13 20:31:50.946140 zram_generator::config[1041]: No configuration found.
Jan 13 20:31:50.946155 systemd[1]: Populated /etc with preset unit settings.
Jan 13 20:31:50.946165 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 13 20:31:50.946176 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 13 20:31:50.946189 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 13 20:31:50.946200 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 13 20:31:50.946210 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 13 20:31:50.946227 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 13 20:31:50.946237 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 13 20:31:50.946248 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 13 20:31:50.946259 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 13 20:31:50.946270 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 13 20:31:50.946282 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 13 20:31:50.946293 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 13 20:31:50.946303 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 13 20:31:50.946314 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 13 20:31:50.946324 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 13 20:31:50.946335 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 13 20:31:50.946346 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 13 20:31:50.946357 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 13 20:31:50.946367 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 13 20:31:50.946379 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 13 20:31:50.946390 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 13 20:31:50.946400 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 13 20:31:50.946411 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 13 20:31:50.946422 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 13 20:31:50.946432 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 13 20:31:50.946442 systemd[1]: Reached target slices.target - Slice Units.
Jan 13 20:31:50.946454 systemd[1]: Reached target swap.target - Swaps.
Jan 13 20:31:50.946466 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 13 20:31:50.946476 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 13 20:31:50.946487 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 13 20:31:50.946497 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 13 20:31:50.946507 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 13 20:31:50.946518 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 13 20:31:50.946528 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 13 20:31:50.946539 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 13 20:31:50.946549 systemd[1]: Mounting media.mount - External Media Directory...
Jan 13 20:31:50.946561 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 13 20:31:50.946572 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 13 20:31:50.946582 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 13 20:31:50.946593 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 13 20:31:50.946603 systemd[1]: Reached target machines.target - Containers.
Jan 13 20:31:50.946613 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 13 20:31:50.946624 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:31:50.946634 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 13 20:31:50.946647 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 13 20:31:50.946657 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:31:50.946668 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:31:50.946679 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:31:50.946689 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 13 20:31:50.946700 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:31:50.946711 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 13 20:31:50.946721 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 13 20:31:50.946731 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 13 20:31:50.946744 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 13 20:31:50.946754 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 13 20:31:50.946764 kernel: loop: module loaded
Jan 13 20:31:50.946773 kernel: fuse: init (API version 7.39)
Jan 13 20:31:50.946783 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 13 20:31:50.946794 kernel: ACPI: bus type drm_connector registered
Jan 13 20:31:50.946804 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 13 20:31:50.946814 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 13 20:31:50.946825 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 13 20:31:50.946837 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 13 20:31:50.946847 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 13 20:31:50.946858 systemd[1]: Stopped verity-setup.service.
Jan 13 20:31:50.946868 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 13 20:31:50.946878 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 13 20:31:50.946888 systemd[1]: Mounted media.mount - External Media Directory.
Jan 13 20:31:50.946935 systemd-journald[1112]: Collecting audit messages is disabled.
Jan 13 20:31:50.946959 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 13 20:31:50.946973 systemd-journald[1112]: Journal started
Jan 13 20:31:50.946994 systemd-journald[1112]: Runtime Journal (/run/log/journal/4057baae8e8a459c91329daef10d2635) is 5.9M, max 47.3M, 41.4M free.
Jan 13 20:31:50.720568 systemd[1]: Queued start job for default target multi-user.target.
Jan 13 20:31:50.737912 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 13 20:31:50.738284 systemd[1]: systemd-journald.service: Deactivated successfully.
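
[Editor's sketch] The 5.9M/47.3M/41.4M figures are journald's automatic sizing of the volatile runtime journal in /run/log/journal, derived from the size of /run rather than from configuration. Pinning them explicitly would look like this (drop-in name and limit are assumptions):

    mkdir -p /etc/systemd/journald.conf.d
    cat > /etc/systemd/journald.conf.d/10-size.conf <<'EOF'
    [Journal]
    RuntimeMaxUse=64M
    EOF
    systemctl restart systemd-journald
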
Jan 13 20:31:50.948960 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 13 20:31:50.950993 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 13 20:31:50.951492 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 13 20:31:50.952760 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 13 20:31:50.955384 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 13 20:31:50.956870 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 13 20:31:50.957044 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 13 20:31:50.959454 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:31:50.959604 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:31:50.961034 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:31:50.961169 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:31:50.962542 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:31:50.963963 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:31:50.965600 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 13 20:31:50.965744 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 13 20:31:50.967272 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:31:50.967420 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:31:50.968790 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 13 20:31:50.971264 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 13 20:31:50.972751 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 13 20:31:50.985504 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 13 20:31:50.989647 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 13 20:31:51.003036 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 13 20:31:51.005227 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 13 20:31:51.006341 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 13 20:31:51.006383 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 13 20:31:51.008394 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 13 20:31:51.010701 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 13 20:31:51.012866 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 13 20:31:51.014068 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:31:51.015916 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 13 20:31:51.018542 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 13 20:31:51.019941 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:31:51.021492 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 13 20:31:51.026322 systemd-journald[1112]: Time spent on flushing to /var/log/journal/4057baae8e8a459c91329daef10d2635 is 35.884ms for 854 entries.
Jan 13 20:31:51.026322 systemd-journald[1112]: System Journal (/var/log/journal/4057baae8e8a459c91329daef10d2635) is 8.0M, max 195.6M, 187.6M free.
Jan 13 20:31:51.071476 systemd-journald[1112]: Received client request to flush runtime journal.
Jan 13 20:31:51.071520 kernel: loop0: detected capacity change from 0 to 189592
Jan 13 20:31:51.027418 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:31:51.028478 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 13 20:31:51.032347 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 13 20:31:51.039139 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 13 20:31:51.041983 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 13 20:31:51.045045 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 13 20:31:51.047190 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 13 20:31:51.048656 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 13 20:31:51.051373 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 13 20:31:51.054807 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 13 20:31:51.067116 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 13 20:31:51.073383 udevadm[1157]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 13 20:31:51.075026 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 13 20:31:51.077962 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
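
[Editor's sketch] systemd-journal-flush is what moved the 854 runtime entries into the persistent System Journal under /var/log/journal (the "Received client request to flush runtime journal" line above). The same operations are exposed on the command line:

    journalctl --flush        # what the flush service triggers
    journalctl --disk-usage   # size of the persistent journal afterwards
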
Jan 13 20:31:51.085086 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 13 20:31:51.094941 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 13 20:31:51.098163 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 13 20:31:51.102801 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 13 20:31:51.103528 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 13 20:31:51.119216 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
Jan 13 20:31:51.119235 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
Jan 13 20:31:51.123750 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 13 20:31:51.124947 kernel: loop1: detected capacity change from 0 to 116808
Jan 13 20:31:51.178971 kernel: loop2: detected capacity change from 0 to 113536
Jan 13 20:31:51.238163 kernel: loop3: detected capacity change from 0 to 189592
Jan 13 20:31:51.244281 kernel: loop4: detected capacity change from 0 to 116808
Jan 13 20:31:51.249992 kernel: loop5: detected capacity change from 0 to 113536
Jan 13 20:31:51.252933 (sd-merge)[1179]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 13 20:31:51.253325 (sd-merge)[1179]: Merged extensions into '/usr'.
Jan 13 20:31:51.256913 systemd[1]: Reloading requested from client PID 1154 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 13 20:31:51.256936 systemd[1]: Reloading...
Jan 13 20:31:51.314955 zram_generator::config[1205]: No configuration found.
Jan 13 20:31:51.340846 ldconfig[1149]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 13 20:31:51.407191 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:31:51.441941 systemd[1]: Reloading finished in 184 ms.
Jan 13 20:31:51.473963 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 13 20:31:51.475599 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
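
[Editor's sketch] The (sd-merge) lines show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, and kubernetes extension images onto /usr; the kubernetes .raw is the one Ignition wrote to /opt/extensions earlier. The merge state can be inspected and redone at runtime:

    systemd-sysext status    # which extensions are merged, and where
    systemd-sysext refresh   # unmerge and re-merge after image changes
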
Jan 13 20:31:51.491144 systemd[1]: Starting ensure-sysext.service...
Jan 13 20:31:51.493020 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 13 20:31:51.504031 systemd[1]: Reloading requested from client PID 1239 ('systemctl') (unit ensure-sysext.service)...
Jan 13 20:31:51.504054 systemd[1]: Reloading...
Jan 13 20:31:51.513811 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 13 20:31:51.514146 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 13 20:31:51.514912 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 13 20:31:51.515170 systemd-tmpfiles[1240]: ACLs are not supported, ignoring.
Jan 13 20:31:51.515226 systemd-tmpfiles[1240]: ACLs are not supported, ignoring.
Jan 13 20:31:51.517363 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:31:51.517377 systemd-tmpfiles[1240]: Skipping /boot
Jan 13 20:31:51.524124 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot.
Jan 13 20:31:51.524143 systemd-tmpfiles[1240]: Skipping /boot
Jan 13 20:31:51.554169 zram_generator::config[1263]: No configuration found.
Jan 13 20:31:51.632900 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:31:51.668115 systemd[1]: Reloading finished in 163 ms.
Jan 13 20:31:51.688201 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 13 20:31:51.700437 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
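
[Editor's sketch] The "Duplicate line for path" warnings above are benign: tmpfiles.d configuration is assembled from several files, and later entries for an already-claimed path are ignored rather than applied twice. The effective merged configuration can be dumped for inspection:

    systemd-tmpfiles --cat-config
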
Jan 13 20:31:51.708360 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:31:51.712552 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 13 20:31:51.715114 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 13 20:31:51.719208 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 13 20:31:51.726505 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 13 20:31:51.729615 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 13 20:31:51.732842 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:31:51.736216 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:31:51.742075 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:31:51.747194 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:31:51.748340 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:31:51.751724 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 13 20:31:51.753987 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:31:51.755983 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:31:51.757690 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:31:51.757812 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:31:51.759775 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 13 20:31:51.761733 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:31:51.761864 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:31:51.766448 systemd-udevd[1311]: Using default interface naming scheme 'v255'.
Jan 13 20:31:51.769786 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:31:51.784285 augenrules[1337]: No rules
Jan 13 20:31:51.785249 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:31:51.788495 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:31:51.793185 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:31:51.794491 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:31:51.798259 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 13 20:31:51.800073 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 13 20:31:51.801966 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:31:51.802156 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:31:51.803607 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 13 20:31:51.805949 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:31:51.806135 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:31:51.808410 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:31:51.808538 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:31:51.813520 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:31:51.816001 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:31:51.817563 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 13 20:31:51.828334 systemd[1]: Finished ensure-sysext.service.
Jan 13 20:31:51.829532 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 13 20:31:51.836316 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 13 20:31:51.839256 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 13 20:31:51.852054 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1348)
Jan 13 20:31:51.855141 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:31:51.856411 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 13 20:31:51.861953 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 13 20:31:51.864080 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 13 20:31:51.867325 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 13 20:31:51.873152 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 13 20:31:51.874376 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 13 20:31:51.879114 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 13 20:31:51.885127 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 13 20:31:51.886256 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 13 20:31:51.886745 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 13 20:31:51.886882 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 13 20:31:51.888343 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 13 20:31:51.888487 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 13 20:31:51.889137 augenrules[1376]: /sbin/augenrules: No change
Jan 13 20:31:51.896048 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 13 20:31:51.897959 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 13 20:31:51.900534 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 13 20:31:51.900707 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 13 20:31:51.908445 augenrules[1410]: No rules
Jan 13 20:31:51.911412 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:31:51.913063 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:31:51.921785 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 13 20:31:51.926501 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 13 20:31:51.929097 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 13 20:31:51.929171 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 13 20:31:51.932239 systemd-resolved[1307]: Positive Trust Anchors:
Jan 13 20:31:51.932561 systemd-resolved[1307]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 13 20:31:51.932596 systemd-resolved[1307]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 13 20:31:51.940648 systemd-resolved[1307]: Defaulting to hostname 'linux'.
Jan 13 20:31:51.951008 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 13 20:31:51.952362 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
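
[Editor's sketch] systemd-resolved loaded the built-in DNSSEC root trust anchor (the "IN DS 20326 8 2" record), installed the standard negative anchors for private and special-use domains, and, lacking a configured hostname, defaulted to 'linux'. Both are visible at runtime:

    resolvectl status | head   # trust anchors and per-link DNS state
    hostnamectl                # shows the fallback hostname in effect
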
Jan 13 20:31:51.977438 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 13 20:31:51.979029 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 13 20:31:51.981193 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 13 20:31:51.982813 systemd[1]: Reached target time-set.target - System Time Set.
Jan 13 20:31:51.990297 systemd-networkd[1393]: lo: Link UP
Jan 13 20:31:51.990305 systemd-networkd[1393]: lo: Gained carrier
Jan 13 20:31:51.993639 systemd-networkd[1393]: Enumeration completed
Jan 13 20:31:51.994295 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:31:51.994304 systemd-networkd[1393]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 13 20:31:51.994704 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 13 20:31:51.995033 systemd-networkd[1393]: eth0: Link UP
Jan 13 20:31:51.995042 systemd-networkd[1393]: eth0: Gained carrier
Jan 13 20:31:51.995056 systemd-networkd[1393]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 13 20:31:51.996241 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 13 20:31:51.998177 systemd[1]: Reached target network.target - Network.
Jan 13 20:31:52.006178 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 13 20:31:52.008682 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 13 20:31:52.011022 systemd-networkd[1393]: eth0: DHCPv4 address 10.0.0.151/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 13 20:31:52.012111 systemd-timesyncd[1395]: Network configuration changed, trying to establish connection.
Jan 13 20:31:52.013262 systemd-timesyncd[1395]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 13 20:31:52.013315 systemd-timesyncd[1395]: Initial clock synchronization to Mon 2025-01-13 20:31:51.715814 UTC.
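
[Editor's sketch] eth0 matched the catch-all /usr/lib/systemd/network/zz-default.network (hence the "potentially unpredictable interface name" warning), took 10.0.0.151/16 over DHCPv4, and timesyncd synced against the NTP server handed out with the lease. The equivalent runtime views:

    networkctl status eth0        # lease, gateway, and matching .network file
    timedatectl timesync-status   # the 10.0.0.1 server and sync state
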
Jan 13 20:31:52.028101 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 13 20:31:52.028625 lvm[1427]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:31:52.066468 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 13 20:31:52.068012 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 13 20:31:52.071015 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 13 20:31:52.072114 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 13 20:31:52.073302 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 13 20:31:52.074652 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 13 20:31:52.075844 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 13 20:31:52.077212 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 13 20:31:52.078400 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 13 20:31:52.078434 systemd[1]: Reached target paths.target - Path Units.
Jan 13 20:31:52.079299 systemd[1]: Reached target timers.target - Timer Units.
Jan 13 20:31:52.080994 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 13 20:31:52.083286 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 13 20:31:52.090944 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 13 20:31:52.093125 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 13 20:31:52.094651 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 13 20:31:52.095832 systemd[1]: Reached target sockets.target - Socket Units.
Jan 13 20:31:52.096796 systemd[1]: Reached target basic.target - Basic System.
Jan 13 20:31:52.097810 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:31:52.097842 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 13 20:31:52.098766 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 13 20:31:52.100783 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 13 20:31:52.102363 lvm[1436]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 13 20:31:52.104081 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 13 20:31:52.110357 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 13 20:31:52.111551 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 13 20:31:52.114199 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 13 20:31:52.115473 jq[1439]: false
Jan 13 20:31:52.118985 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 13 20:31:52.124121 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 13 20:31:52.124574 extend-filesystems[1440]: Found loop3
Jan 13 20:31:52.126042 extend-filesystems[1440]: Found loop4
Jan 13 20:31:52.126042 extend-filesystems[1440]: Found loop5
Jan 13 20:31:52.126042 extend-filesystems[1440]: Found vda
Jan 13 20:31:52.126042 extend-filesystems[1440]: Found vda1
Jan 13 20:31:52.126042 extend-filesystems[1440]: Found vda2
Jan 13 20:31:52.126042 extend-filesystems[1440]: Found vda3
Jan 13 20:31:52.126042 extend-filesystems[1440]: Found usr
Jan 13 20:31:52.126042 extend-filesystems[1440]: Found vda4
Jan 13 20:31:52.126042 extend-filesystems[1440]: Found vda6
Jan 13 20:31:52.126042 extend-filesystems[1440]: Found vda7
Jan 13 20:31:52.126042 extend-filesystems[1440]: Found vda9
Jan 13 20:31:52.126042 extend-filesystems[1440]: Checking size of /dev/vda9
Jan 13 20:31:52.154027 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 13 20:31:52.154060 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1363)
Jan 13 20:31:52.129600 dbus-daemon[1438]: [system] SELinux support is enabled
Jan 13 20:31:52.129270 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 13 20:31:52.154403 extend-filesystems[1440]: Resized partition /dev/vda9
Jan 13 20:31:52.135134 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 13 20:31:52.155593 extend-filesystems[1455]: resize2fs 1.47.1 (20-May-2024)
Jan 13 20:31:52.141226 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 13 20:31:52.141727 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 13 20:31:52.142584 systemd[1]: Starting update-engine.service - Update Engine...
Jan 13 20:31:52.154961 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 13 20:31:52.158391 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 13 20:31:52.163913 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 13 20:31:52.167339 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 13 20:31:52.167530 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 13 20:31:52.167799 systemd[1]: motdgen.service: Deactivated successfully.
Jan 13 20:31:52.167977 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 13 20:31:52.173296 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 13 20:31:52.173551 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 13 20:31:52.173721 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 13 20:31:52.190240 (containerd)[1468]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 13 20:31:52.202690 update_engine[1458]: I20250113 20:31:52.201541  1458 main.cc:92] Flatcar Update Engine starting
Jan 13 20:31:52.202841 jq[1461]: true
Jan 13 20:31:52.196513 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 13 20:31:52.196552 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 13 20:31:52.198103 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 13 20:31:52.204612 jq[1469]: true
Jan 13 20:31:52.198125 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 13 20:31:52.207630 systemd[1]: Started update-engine.service - Update Engine.
Jan 13 20:31:52.208650 update_engine[1458]: I20250113 20:31:52.207736  1458 update_check_scheduler.cc:74] Next update check in 8m6s
Jan 13 20:31:52.209127 tar[1464]: linux-arm64/helm
Jan 13 20:31:52.210609 extend-filesystems[1455]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 13 20:31:52.210609 extend-filesystems[1455]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 13 20:31:52.210609 extend-filesystems[1455]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 13 20:31:52.219432 extend-filesystems[1440]: Resized filesystem in /dev/vda9
Jan 13 20:31:52.213140 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 13 20:31:52.216874 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 13 20:31:52.217053 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
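
[Editor's sketch] extend-filesystems walked the block devices, resized the /dev/vda9 partition, and then grew the root ext4 online from 553472 to 1864699 4k blocks (about 7.1 GiB). The manual equivalent of the filesystem step, once the partition itself has been enlarged:

    resize2fs /dev/vda9   # online grow to fill the partition
    df -h /               # confirm the new root size
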
Jan 13 20:31:52.220767 systemd-logind[1456]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 13 20:31:52.221459 systemd-logind[1456]: New seat seat0.
Jan 13 20:31:52.223029 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 13 20:31:52.277366 bash[1494]: Updated "/home/core/.ssh/authorized_keys"
Jan 13 20:31:52.276061 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 13 20:31:52.279488 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 13 20:31:52.282062 locksmithd[1479]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
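
[Editor's sketch] update-engine scheduled its next check (8m6s out) and locksmithd came up with the "reboot" strategy, i.e. reboot as soon as an update has been applied. On Flatcar both are commonly steered from /etc/flatcar/update.conf; paths and flags below follow the usual Flatcar documentation and should be treated as a sketch:

    cat /etc/flatcar/update.conf    # e.g. REBOOT_STRATEGY=etcd-lock
    update_engine_client -status    # query the engine that scheduled the check
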
Jan 13 20:31:52.389090 containerd[1468]: time="2025-01-13T20:31:52.389013800Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jan 13 20:31:52.417090 containerd[1468]: time="2025-01-13T20:31:52.416759640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:31:52.418163 containerd[1468]: time="2025-01-13T20:31:52.418129120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:31:52.419331 containerd[1468]: time="2025-01-13T20:31:52.418219880Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 13 20:31:52.419331 containerd[1468]: time="2025-01-13T20:31:52.418242160Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 13 20:31:52.419331 containerd[1468]: time="2025-01-13T20:31:52.418397960Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 13 20:31:52.419331 containerd[1468]: time="2025-01-13T20:31:52.418415480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 13 20:31:52.419331 containerd[1468]: time="2025-01-13T20:31:52.418468760Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:31:52.419331 containerd[1468]: time="2025-01-13T20:31:52.418479800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:31:52.419331 containerd[1468]: time="2025-01-13T20:31:52.418632560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:31:52.419331 containerd[1468]: time="2025-01-13T20:31:52.418645880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 13 20:31:52.419331 containerd[1468]: time="2025-01-13T20:31:52.418657960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:31:52.419331 containerd[1468]: time="2025-01-13T20:31:52.418666600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 13 20:31:52.419331 containerd[1468]: time="2025-01-13T20:31:52.418731840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:31:52.419331 containerd[1468]: time="2025-01-13T20:31:52.418977400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 13 20:31:52.419593 containerd[1468]: time="2025-01-13T20:31:52.419070680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 13 20:31:52.419593 containerd[1468]: time="2025-01-13T20:31:52.419084000Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 13 20:31:52.419593 containerd[1468]: time="2025-01-13T20:31:52.419157480Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 13 20:31:52.419593 containerd[1468]: time="2025-01-13T20:31:52.419194880Z" level=info msg="metadata content store policy set" policy=shared
Jan 13 20:31:52.422909 containerd[1468]: time="2025-01-13T20:31:52.422878160Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 13 20:31:52.423051 containerd[1468]: time="2025-01-13T20:31:52.423034040Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 13 20:31:52.423122 containerd[1468]: time="2025-01-13T20:31:52.423095600Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 13 20:31:52.423205 containerd[1468]: time="2025-01-13T20:31:52.423191240Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 13 20:31:52.423260 containerd[1468]: time="2025-01-13T20:31:52.423248440Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 13 20:31:52.423448 containerd[1468]: time="2025-01-13T20:31:52.423428120Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 13 20:31:52.423739 containerd[1468]: time="2025-01-13T20:31:52.423718960Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 13 20:31:52.423904 containerd[1468]: time="2025-01-13T20:31:52.423875720Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 13 20:31:52.423997 containerd[1468]: time="2025-01-13T20:31:52.423982600Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 13 20:31:52.424074 containerd[1468]: time="2025-01-13T20:31:52.424059000Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 13 20:31:52.424129 containerd[1468]: time="2025-01-13T20:31:52.424117160Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 13 20:31:52.424177 containerd[1468]: time="2025-01-13T20:31:52.424166680Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 13 20:31:52.424224 containerd[1468]: time="2025-01-13T20:31:52.424213280Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 13 20:31:52.424274 containerd[1468]: time="2025-01-13T20:31:52.424262640Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 13 20:31:52.424324 containerd[1468]: time="2025-01-13T20:31:52.424313160Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 13 20:31:52.424373 containerd[1468]: time="2025-01-13T20:31:52.424362760Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 13 20:31:52.424449 containerd[1468]: time="2025-01-13T20:31:52.424435880Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 13 20:31:52.424499 containerd[1468]: time="2025-01-13T20:31:52.424488000Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 13 20:31:52.424558 containerd[1468]: time="2025-01-13T20:31:52.424547240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 13 20:31:52.424608 containerd[1468]: time="2025-01-13T20:31:52.424597320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 13 20:31:52.424678 containerd[1468]: time="2025-01-13T20:31:52.424664600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 13 20:31:52.424740 containerd[1468]: time="2025-01-13T20:31:52.424726880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 13 20:31:52.424791 containerd[1468]: time="2025-01-13T20:31:52.424779000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 13 20:31:52.424849 containerd[1468]: time="2025-01-13T20:31:52.424831280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 13 20:31:52.424943 containerd[1468]: time="2025-01-13T20:31:52.424905240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 13 20:31:52.425002 containerd[1468]: time="2025-01-13T20:31:52.424988920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 13 20:31:52.425071 containerd[1468]: time="2025-01-13T20:31:52.425057400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 13 20:31:52.425123 containerd[1468]: time="2025-01-13T20:31:52.425111760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 13 20:31:52.425173 containerd[1468]: time="2025-01-13T20:31:52.425162040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 13 20:31:52.425229 containerd[1468]: time="2025-01-13T20:31:52.425216720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 13 20:31:52.425280 containerd[1468]: time="2025-01-13T20:31:52.425269360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 13 20:31:52.425334 containerd[1468]: time="2025-01-13T20:31:52.425322960Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 13 20:31:52.425411 containerd[1468]: time="2025-01-13T20:31:52.425396760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 13 20:31:52.425473 containerd[1468]: time="2025-01-13T20:31:52.425461040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 13 20:31:52.425521 containerd[1468]: time="2025-01-13T20:31:52.425509920Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 13 20:31:52.425751 containerd[1468]: time="2025-01-13T20:31:52.425729600Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 13 20:31:52.425819 containerd[1468]: time="2025-01-13T20:31:52.425804320Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 13 20:31:52.425868 containerd[1468]: time="2025-01-13T20:31:52.425856960Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 13 20:31:52.425947 containerd[1468]: time="2025-01-13T20:31:52.425916600Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 13 20:31:52.426014 containerd[1468]: time="2025-01-13T20:31:52.425999960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 13 20:31:52.426065 containerd[1468]: time="2025-01-13T20:31:52.426053360Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 13 20:31:52.426109 containerd[1468]: time="2025-01-13T20:31:52.426098320Z" level=info msg="NRI interface is disabled by configuration."
Jan 13 20:31:52.426948 containerd[1468]: time="2025-01-13T20:31:52.426149640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 13 20:31:52.426994 containerd[1468]: time="2025-01-13T20:31:52.426502040Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
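[Annotation] The single-line dump above is the CRI plugin's effective configuration. A trimmed Go mirror of the handful of fields that matter on this boot makes it easier to read; the field names follow the log, but this struct is an illustrative sketch, not containerd's actual config type.

// Illustrative only: a trimmed mirror of a few values from the CRI config
// dump above. NOT containerd's real type, just a reading aid.
package main

import "fmt"

type criConfigSketch struct {
	Snapshotter          string // "overlayfs"
	DefaultRuntimeName   string // "runc", with SystemdCgroup:true in its options
	SandboxImage         string // "registry.k8s.io/pause:3.8"
	NetworkPluginBinDir  string // "/opt/cni/bin"
	NetworkPluginConfDir string // "/etc/cni/net.d"
	ContainerdEndpoint   string // "/run/containerd/containerd.sock"
}

func main() {
	cfg := criConfigSketch{
		Snapshotter:          "overlayfs",
		DefaultRuntimeName:   "runc",
		SandboxImage:         "registry.k8s.io/pause:3.8",
		NetworkPluginBinDir:  "/opt/cni/bin",
		NetworkPluginConfDir: "/etc/cni/net.d",
		ContainerdEndpoint:   "/run/containerd/containerd.sock",
	}
	fmt.Printf("%+v\n", cfg)
}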
Jan 13 20:31:52.426994 containerd[1468]: time="2025-01-13T20:31:52.426548600Z" level=info msg="Connect containerd service"
Jan 13 20:31:52.426994 containerd[1468]: time="2025-01-13T20:31:52.426579760Z" level=info msg="using legacy CRI server"
Jan 13 20:31:52.426994 containerd[1468]: time="2025-01-13T20:31:52.426586560Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 13 20:31:52.426994 containerd[1468]: time="2025-01-13T20:31:52.426805120Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 13 20:31:52.427825 containerd[1468]: time="2025-01-13T20:31:52.427762280Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
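[Annotation] This error is expected on a fresh node: the CRI plugin probes /etc/cni/net.d at startup and nothing has populated it yet (a CNI provider normally does that later, once the cluster is bootstrapped). A minimal Go sketch of the probe, assuming the common CNI file extensions (.conf, .conflist, .json) — the extension list is an assumption, not lifted from containerd's source.

// Rough approximation of the CNI config probe behind the error above.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func loadCNIConfigs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var found []string
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // assumed extension set
			found = append(found, filepath.Join(dir, e.Name()))
		}
	}
	if len(found) == 0 {
		return nil, fmt.Errorf("no network config found in %s", dir)
	}
	return found, nil
}

func main() {
	if _, err := loadCNIConfigs("/etc/cni/net.d"); err != nil {
		fmt.Println("cni config load failed:", err) // matches the log's failure mode
	}
}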
Jan 13 20:31:52.428074 containerd[1468]: time="2025-01-13T20:31:52.428030480Z" level=info msg="Start subscribing containerd event"
Jan 13 20:31:52.428134 containerd[1468]: time="2025-01-13T20:31:52.428088040Z" level=info msg="Start recovering state"
Jan 13 20:31:52.428157 containerd[1468]: time="2025-01-13T20:31:52.428150800Z" level=info msg="Start event monitor"
Jan 13 20:31:52.428176 containerd[1468]: time="2025-01-13T20:31:52.428160840Z" level=info msg="Start snapshots syncer"
Jan 13 20:31:52.428176 containerd[1468]: time="2025-01-13T20:31:52.428170960Z" level=info msg="Start cni network conf syncer for default"
Jan 13 20:31:52.428209 containerd[1468]: time="2025-01-13T20:31:52.428179480Z" level=info msg="Start streaming server"
Jan 13 20:31:52.428644 containerd[1468]: time="2025-01-13T20:31:52.428621720Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 13 20:31:52.428747 containerd[1468]: time="2025-01-13T20:31:52.428732680Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 13 20:31:52.431422 systemd[1]: Started containerd.service - containerd container runtime.
Jan 13 20:31:52.432703 containerd[1468]: time="2025-01-13T20:31:52.432679080Z" level=info msg="containerd successfully booted in 0.045010s"
Jan 13 20:31:52.546499 tar[1464]: linux-arm64/LICENSE
Jan 13 20:31:52.546595 tar[1464]: linux-arm64/README.md
Jan 13 20:31:52.558963 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 13 20:31:52.565834 sshd_keygen[1462]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 13 20:31:52.584337 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 13 20:31:52.598161 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 13 20:31:52.603649 systemd[1]: issuegen.service: Deactivated successfully.
Jan 13 20:31:52.603882 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 13 20:31:52.608491 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 13 20:31:52.622258 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 13 20:31:52.625050 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 13 20:31:52.627135 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jan 13 20:31:52.628433 systemd[1]: Reached target getty.target - Login Prompts.
Jan 13 20:31:53.337111 systemd-networkd[1393]: eth0: Gained IPv6LL
Jan 13 20:31:53.339054 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 13 20:31:53.341297 systemd[1]: Reached target network-online.target - Network is Online.
Jan 13 20:31:53.359242 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 13 20:31:53.361809 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:31:53.363939 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 13 20:31:53.378855 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 13 20:31:53.379182 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 13 20:31:53.380864 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 13 20:31:53.384063 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 13 20:31:53.838524 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:31:53.840123 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 13 20:31:53.843252 (kubelet)[1550]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:31:53.845015 systemd[1]: Startup finished in 658ms (kernel) + 4.631s (initrd) + 3.504s (userspace) = 8.795s.
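[Annotation] A quick check on the "Startup finished" arithmetic: the three displayed components sum to 8.793s against the reported 8.795s total. The ~2ms gap is presumably display rounding of the underlying microsecond timestamps, not an accounting error.

// Summing the three displayed components of the startup line above.
package main

import (
	"fmt"
	"time"
)

func main() {
	kernel := 658 * time.Millisecond
	initrd := 4631 * time.Millisecond
	userspace := 3504 * time.Millisecond
	fmt.Println(kernel + initrd + userspace) // 8.793s; the journal reports 8.795s
}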
Jan 13 20:31:54.241030 kubelet[1550]: E0113 20:31:54.240917    1550 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:31:54.243564 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:31:54.243700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
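[Annotation] This kubelet crash is benign at this stage: /var/lib/kubelet/config.yaml is normally written by kubeadm during init/join, which has not run yet, so the unit exits 1 and systemd keeps rescheduling it (see the restart-counter lines later in this log). A minimal sketch of the failing precondition — not kubelet's code, just the same OS-level check it trips over.

// The kubelet refuses to start until its config file exists.
package main

import (
	"fmt"
	"os"
)

func main() {
	const path = "/var/lib/kubelet/config.yaml"
	if _, err := os.Stat(path); err != nil {
		// On this boot: "open /var/lib/kubelet/config.yaml: no such file or directory"
		fmt.Printf("failed to load kubelet config file, path: %s, error: %v\n", path, err)
		os.Exit(1) // systemd then records status=1/FAILURE and schedules a restart
	}
}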
Jan 13 20:31:57.962494 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 13 20:31:57.963579 systemd[1]: Started sshd@0-10.0.0.151:22-10.0.0.1:37636.service - OpenSSH per-connection server daemon (10.0.0.1:37636).
Jan 13 20:31:58.023015 sshd[1563]: Accepted publickey for core from 10.0.0.1 port 37636 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:31:58.024657 sshd-session[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:31:58.033293 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 13 20:31:58.040147 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 13 20:31:58.041662 systemd-logind[1456]: New session 1 of user core.
Jan 13 20:31:58.048972 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 13 20:31:58.051112 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 13 20:31:58.057374 (systemd)[1567]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 13 20:31:58.135184 systemd[1567]: Queued start job for default target default.target.
Jan 13 20:31:58.144858 systemd[1567]: Created slice app.slice - User Application Slice.
Jan 13 20:31:58.144910 systemd[1567]: Reached target paths.target - Paths.
Jan 13 20:31:58.144938 systemd[1567]: Reached target timers.target - Timers.
Jan 13 20:31:58.146182 systemd[1567]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 13 20:31:58.155672 systemd[1567]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 13 20:31:58.155740 systemd[1567]: Reached target sockets.target - Sockets.
Jan 13 20:31:58.155753 systemd[1567]: Reached target basic.target - Basic System.
Jan 13 20:31:58.155789 systemd[1567]: Reached target default.target - Main User Target.
Jan 13 20:31:58.155815 systemd[1567]: Startup finished in 93ms.
Jan 13 20:31:58.156136 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 13 20:31:58.157398 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 13 20:31:58.223238 systemd[1]: Started sshd@1-10.0.0.151:22-10.0.0.1:37648.service - OpenSSH per-connection server daemon (10.0.0.1:37648).
Jan 13 20:31:58.259332 sshd[1578]: Accepted publickey for core from 10.0.0.1 port 37648 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:31:58.260564 sshd-session[1578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:31:58.264814 systemd-logind[1456]: New session 2 of user core.
Jan 13 20:31:58.280075 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 13 20:31:58.330145 sshd[1580]: Connection closed by 10.0.0.1 port 37648
Jan 13 20:31:58.330590 sshd-session[1578]: pam_unix(sshd:session): session closed for user core
Jan 13 20:31:58.340195 systemd[1]: sshd@1-10.0.0.151:22-10.0.0.1:37648.service: Deactivated successfully.
Jan 13 20:31:58.341665 systemd[1]: session-2.scope: Deactivated successfully.
Jan 13 20:31:58.343137 systemd-logind[1456]: Session 2 logged out. Waiting for processes to exit.
Jan 13 20:31:58.344378 systemd[1]: Started sshd@2-10.0.0.151:22-10.0.0.1:37660.service - OpenSSH per-connection server daemon (10.0.0.1:37660).
Jan 13 20:31:58.345235 systemd-logind[1456]: Removed session 2.
Jan 13 20:31:58.384156 sshd[1585]: Accepted publickey for core from 10.0.0.1 port 37660 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:31:58.385356 sshd-session[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:31:58.389194 systemd-logind[1456]: New session 3 of user core.
Jan 13 20:31:58.401066 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 13 20:31:58.447228 sshd[1587]: Connection closed by 10.0.0.1 port 37660
Jan 13 20:31:58.447616 sshd-session[1585]: pam_unix(sshd:session): session closed for user core
Jan 13 20:31:58.462432 systemd[1]: sshd@2-10.0.0.151:22-10.0.0.1:37660.service: Deactivated successfully.
Jan 13 20:31:58.463882 systemd[1]: session-3.scope: Deactivated successfully.
Jan 13 20:31:58.468083 systemd-logind[1456]: Session 3 logged out. Waiting for processes to exit.
Jan 13 20:31:58.478275 systemd[1]: Started sshd@3-10.0.0.151:22-10.0.0.1:37676.service - OpenSSH per-connection server daemon (10.0.0.1:37676).
Jan 13 20:31:58.479200 systemd-logind[1456]: Removed session 3.
Jan 13 20:31:58.513597 sshd[1592]: Accepted publickey for core from 10.0.0.1 port 37676 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:31:58.514708 sshd-session[1592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:31:58.518546 systemd-logind[1456]: New session 4 of user core.
Jan 13 20:31:58.532095 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 13 20:31:58.582539 sshd[1594]: Connection closed by 10.0.0.1 port 37676
Jan 13 20:31:58.582941 sshd-session[1592]: pam_unix(sshd:session): session closed for user core
Jan 13 20:31:58.593159 systemd[1]: sshd@3-10.0.0.151:22-10.0.0.1:37676.service: Deactivated successfully.
Jan 13 20:31:58.594963 systemd[1]: session-4.scope: Deactivated successfully.
Jan 13 20:31:58.596096 systemd-logind[1456]: Session 4 logged out. Waiting for processes to exit.
Jan 13 20:31:58.597245 systemd[1]: Started sshd@4-10.0.0.151:22-10.0.0.1:37680.service - OpenSSH per-connection server daemon (10.0.0.1:37680).
Jan 13 20:31:58.598023 systemd-logind[1456]: Removed session 4.
Jan 13 20:31:58.636175 sshd[1599]: Accepted publickey for core from 10.0.0.1 port 37680 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:31:58.637367 sshd-session[1599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:31:58.641217 systemd-logind[1456]: New session 5 of user core.
Jan 13 20:31:58.649064 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 13 20:31:58.710891 sudo[1602]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 13 20:31:58.711175 sudo[1602]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:31:58.726689 sudo[1602]: pam_unix(sudo:session): session closed for user root
Jan 13 20:31:58.729744 sshd[1601]: Connection closed by 10.0.0.1 port 37680
Jan 13 20:31:58.730177 sshd-session[1599]: pam_unix(sshd:session): session closed for user core
Jan 13 20:31:58.745323 systemd[1]: sshd@4-10.0.0.151:22-10.0.0.1:37680.service: Deactivated successfully.
Jan 13 20:31:58.748189 systemd[1]: session-5.scope: Deactivated successfully.
Jan 13 20:31:58.750141 systemd-logind[1456]: Session 5 logged out. Waiting for processes to exit.
Jan 13 20:31:58.765189 systemd[1]: Started sshd@5-10.0.0.151:22-10.0.0.1:37690.service - OpenSSH per-connection server daemon (10.0.0.1:37690).
Jan 13 20:31:58.765990 systemd-logind[1456]: Removed session 5.
Jan 13 20:31:58.800538 sshd[1607]: Accepted publickey for core from 10.0.0.1 port 37690 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:31:58.801645 sshd-session[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:31:58.805601 systemd-logind[1456]: New session 6 of user core.
Jan 13 20:31:58.817050 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 13 20:31:58.867535 sudo[1611]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 13 20:31:58.867808 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:31:58.871210 sudo[1611]: pam_unix(sudo:session): session closed for user root
Jan 13 20:31:58.875720 sudo[1610]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jan 13 20:31:58.876232 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:31:58.894282 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 13 20:31:58.915611 augenrules[1633]: No rules
Jan 13 20:31:58.916722 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 13 20:31:58.916938 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 13 20:31:58.918085 sudo[1610]: pam_unix(sudo:session): session closed for user root
Jan 13 20:31:58.919134 sshd[1609]: Connection closed by 10.0.0.1 port 37690
Jan 13 20:31:58.919491 sshd-session[1607]: pam_unix(sshd:session): session closed for user core
Jan 13 20:31:58.931106 systemd[1]: sshd@5-10.0.0.151:22-10.0.0.1:37690.service: Deactivated successfully.
Jan 13 20:31:58.932370 systemd[1]: session-6.scope: Deactivated successfully.
Jan 13 20:31:58.933985 systemd-logind[1456]: Session 6 logged out. Waiting for processes to exit.
Jan 13 20:31:58.939160 systemd[1]: Started sshd@6-10.0.0.151:22-10.0.0.1:37704.service - OpenSSH per-connection server daemon (10.0.0.1:37704).
Jan 13 20:31:58.939843 systemd-logind[1456]: Removed session 6.
Jan 13 20:31:58.974311 sshd[1641]: Accepted publickey for core from 10.0.0.1 port 37704 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:31:58.975361 sshd-session[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:31:58.978975 systemd-logind[1456]: New session 7 of user core.
Jan 13 20:31:58.992072 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 13 20:31:59.041554 sudo[1644]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 13 20:31:59.041814 sudo[1644]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 13 20:31:59.350346 (dockerd)[1665]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 13 20:31:59.350376 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 13 20:31:59.589044 dockerd[1665]: time="2025-01-13T20:31:59.588595382Z" level=info msg="Starting up"
Jan 13 20:31:59.720667 systemd[1]: var-lib-docker-metacopy\x2dcheck2151774686-merged.mount: Deactivated successfully.
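[Annotation] Mount unit names like the one above use systemd's path escaping: "/" becomes "-" and a literal "-" becomes "\x2d". The small decoder below handles just those two rules (systemd-escape(1) implements the full algorithm); it recovers the real path behind the unit name.

// Decoding the systemd-escaped mount unit name seen above.
package main

import (
	"fmt"
	"strings"
)

func unescapeUnitPath(name string) string {
	name = strings.TrimSuffix(name, ".mount")
	name = strings.ReplaceAll(name, "-", "/")    // '-' separates path components
	name = strings.ReplaceAll(name, `\x2d`, "-") // \x2d is a literal '-'
	return "/" + name
}

func main() {
	fmt.Println(unescapeUnitPath(`var-lib-docker-metacopy\x2dcheck2151774686-merged.mount`))
	// -> /var/lib/docker/metacopy-check2151774686/merged
}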
Jan 13 20:31:59.727568 dockerd[1665]: time="2025-01-13T20:31:59.727523044Z" level=info msg="Loading containers: start."
Jan 13 20:31:59.860941 kernel: Initializing XFRM netlink socket
Jan 13 20:31:59.927735 systemd-networkd[1393]: docker0: Link UP
Jan 13 20:31:59.958154 dockerd[1665]: time="2025-01-13T20:31:59.958053918Z" level=info msg="Loading containers: done."
Jan 13 20:31:59.968433 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3170437313-merged.mount: Deactivated successfully.
Jan 13 20:31:59.970348 dockerd[1665]: time="2025-01-13T20:31:59.970301102Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 13 20:31:59.970407 dockerd[1665]: time="2025-01-13T20:31:59.970391292Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
Jan 13 20:31:59.970527 dockerd[1665]: time="2025-01-13T20:31:59.970497726Z" level=info msg="Daemon has completed initialization"
Jan 13 20:31:59.997128 dockerd[1665]: time="2025-01-13T20:31:59.996748103Z" level=info msg="API listen on /run/docker.sock"
Jan 13 20:31:59.996992 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 13 20:32:00.606787 containerd[1468]: time="2025-01-13T20:32:00.606731742Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\""
Jan 13 20:32:01.339818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3437856726.mount: Deactivated successfully.
Jan 13 20:32:04.494049 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jan 13 20:32:04.503140 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:32:04.593007 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:32:04.596465 (kubelet)[1920]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:32:04.630185 kubelet[1920]: E0113 20:32:04.630130    1920 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:32:04.632571 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:32:04.632692 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:32:08.505801 containerd[1468]: time="2025-01-13T20:32:08.505706531Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.4\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:32:08.506292 containerd[1468]: time="2025-01-13T20:32:08.506246847Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.4: active requests=0, bytes read=25615587"
Jan 13 20:32:08.506910 containerd[1468]: time="2025-01-13T20:32:08.506878065Z" level=info msg="ImageCreate event name:\"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:32:08.509955 containerd[1468]: time="2025-01-13T20:32:08.509903613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:32:08.511591 containerd[1468]: time="2025-01-13T20:32:08.511552302Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.4\" with image id \"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e\", size \"25612385\" in 7.904781821s"
Jan 13 20:32:08.511629 containerd[1468]: time="2025-01-13T20:32:08.511594331Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:3e1123d6ebadbafa6eb77a9047f23f20befbbe2f177eb473a81b27a5de8c2ec5\""
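[Annotation] The pull lines carry enough to estimate registry throughput: 25,615,587 bytes read over 7.904781821s is roughly 3.2 MB/s. A back-of-the-envelope calculator, with both numbers copied from the two log lines above.

// Effective pull throughput for kube-apiserver:v1.31.4.
package main

import "fmt"

func main() {
	const bytesRead = 25615587  // "bytes read=25615587"
	const seconds = 7.904781821 // "in 7.904781821s"
	fmt.Printf("%.2f MB/s\n", bytesRead/seconds/1e6) // ≈ 3.24 MB/s
}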
Jan 13 20:32:08.512321 containerd[1468]: time="2025-01-13T20:32:08.512208674Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\""
Jan 13 20:32:10.294461 containerd[1468]: time="2025-01-13T20:32:10.294412608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.4\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:32:10.294962 containerd[1468]: time="2025-01-13T20:32:10.294894710Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.4: active requests=0, bytes read=22470098"
Jan 13 20:32:10.295611 containerd[1468]: time="2025-01-13T20:32:10.295562695Z" level=info msg="ImageCreate event name:\"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:32:10.298789 containerd[1468]: time="2025-01-13T20:32:10.298750287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:32:10.299884 containerd[1468]: time="2025-01-13T20:32:10.299802949Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.4\" with image id \"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b\", size \"23872417\" in 1.787558762s"
Jan 13 20:32:10.299884 containerd[1468]: time="2025-01-13T20:32:10.299837297Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:d5369864a42bf2c01d3ad462832526b7d3e40620c0e75fecefbffc203562ad55\""
Jan 13 20:32:10.300468 containerd[1468]: time="2025-01-13T20:32:10.300291346Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\""
Jan 13 20:32:11.955458 containerd[1468]: time="2025-01-13T20:32:11.955417159Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.4\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:32:11.956001 containerd[1468]: time="2025-01-13T20:32:11.955950640Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.4: active requests=0, bytes read=17024204"
Jan 13 20:32:11.956872 containerd[1468]: time="2025-01-13T20:32:11.956842673Z" level=info msg="ImageCreate event name:\"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:32:11.959739 containerd[1468]: time="2025-01-13T20:32:11.959708610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:32:11.961989 containerd[1468]: time="2025-01-13T20:32:11.961448345Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.4\" with image id \"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd\", size \"18426541\" in 1.661126871s"
Jan 13 20:32:11.961989 containerd[1468]: time="2025-01-13T20:32:11.961483467Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:d99fc9a32f6b42ab5537eec09d599efae0f61c109406dae1ba255cec288fcb95\""
Jan 13 20:32:11.962389 containerd[1468]: time="2025-01-13T20:32:11.962290067Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\""
Jan 13 20:32:12.954906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3662485248.mount: Deactivated successfully.
Jan 13 20:32:13.267877 containerd[1468]: time="2025-01-13T20:32:13.267762761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.4\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:32:13.268883 containerd[1468]: time="2025-01-13T20:32:13.268665154Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.4: active requests=0, bytes read=26771428"
Jan 13 20:32:13.269622 containerd[1468]: time="2025-01-13T20:32:13.269589011Z" level=info msg="ImageCreate event name:\"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:32:13.271799 containerd[1468]: time="2025-01-13T20:32:13.271763323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:32:13.272686 containerd[1468]: time="2025-01-13T20:32:13.272572117Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.4\" with image id \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf\", size \"26770445\" in 1.310254056s"
Jan 13 20:32:13.272686 containerd[1468]: time="2025-01-13T20:32:13.272603516Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:34e142197cb996099cc1e98902c112642b3fb3dc559140c0a95279aa8d254d3a\""
Jan 13 20:32:13.273357 containerd[1468]: time="2025-01-13T20:32:13.273334112Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 13 20:32:13.842634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1311811104.mount: Deactivated successfully.
Jan 13 20:32:14.732964 containerd[1468]: time="2025-01-13T20:32:14.732901001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:32:14.733512 containerd[1468]: time="2025-01-13T20:32:14.733466645Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383"
Jan 13 20:32:14.734251 containerd[1468]: time="2025-01-13T20:32:14.734222979Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:32:14.737301 containerd[1468]: time="2025-01-13T20:32:14.737258490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:32:14.738548 containerd[1468]: time="2025-01-13T20:32:14.738507513Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.465143515s"
Jan 13 20:32:14.738548 containerd[1468]: time="2025-01-13T20:32:14.738545188Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Jan 13 20:32:14.739056 containerd[1468]: time="2025-01-13T20:32:14.739029415Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jan 13 20:32:14.883055 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 13 20:32:14.893130 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:32:14.977586 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:32:14.980710 (kubelet)[2001]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 13 20:32:15.011059 kubelet[2001]: E0113 20:32:15.010957    2001 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 13 20:32:15.013233 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 13 20:32:15.013374 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 13 20:32:15.228800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2969076557.mount: Deactivated successfully.
Jan 13 20:32:15.236428 containerd[1468]: time="2025-01-13T20:32:15.236390309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:32:15.237078 containerd[1468]: time="2025-01-13T20:32:15.237033400Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Jan 13 20:32:15.237728 containerd[1468]: time="2025-01-13T20:32:15.237696012Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:32:15.239927 containerd[1468]: time="2025-01-13T20:32:15.239890521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:32:15.241440 containerd[1468]: time="2025-01-13T20:32:15.241354672Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 502.290972ms"
Jan 13 20:32:15.241440 containerd[1468]: time="2025-01-13T20:32:15.241387886Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jan 13 20:32:15.241985 containerd[1468]: time="2025-01-13T20:32:15.241900195Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Jan 13 20:32:15.798532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount414807855.mount: Deactivated successfully.
Jan 13 20:32:19.273769 containerd[1468]: time="2025-01-13T20:32:19.273722416Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:32:19.274990 containerd[1468]: time="2025-01-13T20:32:19.274710674Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406427"
Jan 13 20:32:19.275777 containerd[1468]: time="2025-01-13T20:32:19.275724502Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:32:19.279452 containerd[1468]: time="2025-01-13T20:32:19.279407365Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:32:19.280947 containerd[1468]: time="2025-01-13T20:32:19.280776063Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 4.038833745s"
Jan 13 20:32:19.280947 containerd[1468]: time="2025-01-13T20:32:19.280808346Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Jan 13 20:32:23.979048 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:32:23.989255 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:32:24.010196 systemd[1]: Reloading requested from client PID 2096 ('systemctl') (unit session-7.scope)...
Jan 13 20:32:24.010213 systemd[1]: Reloading...
Jan 13 20:32:24.079955 zram_generator::config[2136]: No configuration found.
Jan 13 20:32:24.192140 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:32:24.242607 systemd[1]: Reloading finished in 232 ms.
Jan 13 20:32:24.285485 systemd[1]: kubelet.service: Deactivated successfully.
Jan 13 20:32:24.285698 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:32:24.287836 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:32:24.381854 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:32:24.385883 (kubelet)[2181]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 20:32:24.419637 kubelet[2181]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:32:24.419637 kubelet[2181]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 13 20:32:24.419637 kubelet[2181]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:32:24.419990 kubelet[2181]: I0113 20:32:24.419803    2181 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 20:32:26.087052 kubelet[2181]: I0113 20:32:26.087013    2181 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Jan 13 20:32:26.087720 kubelet[2181]: I0113 20:32:26.087375    2181 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 20:32:26.088020 kubelet[2181]: I0113 20:32:26.088001    2181 server.go:929] "Client rotation is on, will bootstrap in background"
Jan 13 20:32:26.166223 kubelet[2181]: I0113 20:32:26.166178    2181 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 20:32:26.166732 kubelet[2181]: E0113 20:32:26.166700    2181 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.151:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError"
Jan 13 20:32:26.175319 kubelet[2181]: E0113 20:32:26.175272    2181 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 13 20:32:26.175319 kubelet[2181]: I0113 20:32:26.175308    2181 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 13 20:32:26.178737 kubelet[2181]: I0113 20:32:26.178704    2181 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Jan 13 20:32:26.179606 kubelet[2181]: I0113 20:32:26.179563    2181 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 13 20:32:26.179778 kubelet[2181]: I0113 20:32:26.179737    2181 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 20:32:26.179965 kubelet[2181]: I0113 20:32:26.179767    2181 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
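[Annotation] The nodeConfig dump above is JSON, so its hard eviction thresholds can be pulled out mechanically: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. The struct below is trimmed to the fields needed here, not kubelet's full type; the raw string is copied from the log.

// Extracting HardEvictionThresholds from the nodeConfig JSON above.
package main

import (
	"encoding/json"
	"fmt"
)

type threshold struct {
	Signal   string
	Operator string
	Value    struct {
		Quantity   *string // set for absolute thresholds, null for percentages
		Percentage float64
	}
}

func main() {
	raw := `[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},
	         {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}},
	         {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}},
	         {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15}},
	         {"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05}}]`
	var ts []threshold
	if err := json.Unmarshal([]byte(raw), &ts); err != nil {
		panic(err)
	}
	for _, t := range ts {
		if t.Value.Quantity != nil {
			fmt.Printf("%s %s %s\n", t.Signal, t.Operator, *t.Value.Quantity)
		} else {
			fmt.Printf("%s %s %.0f%%\n", t.Signal, t.Operator, t.Value.Percentage*100)
		}
	}
}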
Jan 13 20:32:26.180107 kubelet[2181]: I0113 20:32:26.180087    2181 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 20:32:26.180107 kubelet[2181]: I0113 20:32:26.180099    2181 container_manager_linux.go:300] "Creating device plugin manager"
Jan 13 20:32:26.180299 kubelet[2181]: I0113 20:32:26.180279    2181 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:32:26.181875 kubelet[2181]: I0113 20:32:26.181850    2181 kubelet.go:408] "Attempting to sync node with API server"
Jan 13 20:32:26.181910 kubelet[2181]: I0113 20:32:26.181879    2181 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 13 20:32:26.182561 kubelet[2181]: I0113 20:32:26.181975    2181 kubelet.go:314] "Adding apiserver pod source"
Jan 13 20:32:26.182561 kubelet[2181]: I0113 20:32:26.181991    2181 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 13 20:32:26.185208 kubelet[2181]: W0113 20:32:26.185158    2181 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.151:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused
Jan 13 20:32:26.186416 kubelet[2181]: E0113 20:32:26.185321    2181 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.151:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError"
Jan 13 20:32:26.186416 kubelet[2181]: W0113 20:32:26.185187    2181 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused
Jan 13 20:32:26.186416 kubelet[2181]: E0113 20:32:26.185365    2181 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError"
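[Annotation] Every reflector and event error in this stretch shares one root cause: nothing is listening on 10.0.0.151:6443 yet. On a kubeadm-style bootstrap the apiserver itself runs as a static pod that this kubelet has yet to start, so "connection refused" here is part of normal startup ordering. A plain TCP dial reproduces the exact error string.

// Probing the not-yet-listening apiserver endpoint from the errors above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "10.0.0.151:6443", 2*time.Second)
	if err != nil {
		fmt.Println(err) // dial tcp 10.0.0.151:6443: connect: connection refused
		return
	}
	conn.Close()
}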
Jan 13 20:32:26.187197 kubelet[2181]: I0113 20:32:26.186990    2181 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 13 20:32:26.189283 kubelet[2181]: I0113 20:32:26.189263    2181 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 13 20:32:26.189947 kubelet[2181]: W0113 20:32:26.189912    2181 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 13 20:32:26.190671 kubelet[2181]: I0113 20:32:26.190618    2181 server.go:1269] "Started kubelet"
Jan 13 20:32:26.192431 kubelet[2181]: I0113 20:32:26.192349    2181 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 13 20:32:26.193157 kubelet[2181]: I0113 20:32:26.192770    2181 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 13 20:32:26.193157 kubelet[2181]: I0113 20:32:26.192810    2181 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 13 20:32:26.193157 kubelet[2181]: I0113 20:32:26.192895    2181 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 13 20:32:26.193656 kubelet[2181]: I0113 20:32:26.193485    2181 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 13 20:32:26.194509 kubelet[2181]: E0113 20:32:26.193409    2181 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.151:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.151:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a5ab680773b10  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 20:32:26.190592784 +0000 UTC m=+1.801819392,LastTimestamp:2025-01-13 20:32:26.190592784 +0000 UTC m=+1.801819392,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 13 20:32:26.194643 kubelet[2181]: I0113 20:32:26.194545    2181 server.go:460] "Adding debug handlers to kubelet server"
Jan 13 20:32:26.194968 kubelet[2181]: E0113 20:32:26.194942    2181 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 20:32:26.195152 kubelet[2181]: I0113 20:32:26.195134    2181 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 13 20:32:26.196233 kubelet[2181]: I0113 20:32:26.195326    2181 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 13 20:32:26.196233 kubelet[2181]: I0113 20:32:26.195420    2181 reconciler.go:26] "Reconciler: start to sync state"
Jan 13 20:32:26.196233 kubelet[2181]: E0113 20:32:26.195633    2181 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 13 20:32:26.196233 kubelet[2181]: I0113 20:32:26.195852    2181 factory.go:221] Registration of the systemd container factory successfully
Jan 13 20:32:26.196233 kubelet[2181]: W0113 20:32:26.195887    2181 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused
Jan 13 20:32:26.196233 kubelet[2181]: I0113 20:32:26.195943    2181 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 20:32:26.196233 kubelet[2181]: E0113 20:32:26.195950    2181 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError"
Jan 13 20:32:26.196233 kubelet[2181]: E0113 20:32:26.196191    2181 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.151:6443: connect: connection refused" interval="200ms"
Jan 13 20:32:26.197395 kubelet[2181]: I0113 20:32:26.197369    2181 factory.go:221] Registration of the containerd container factory successfully
Jan 13 20:32:26.208918 kubelet[2181]: I0113 20:32:26.208873    2181 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 20:32:26.209636 kubelet[2181]: I0113 20:32:26.209615    2181 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 20:32:26.209636 kubelet[2181]: I0113 20:32:26.209631    2181 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 20:32:26.209728 kubelet[2181]: I0113 20:32:26.209647    2181 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:32:26.210937 kubelet[2181]: I0113 20:32:26.210885    2181 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 13 20:32:26.210937 kubelet[2181]: I0113 20:32:26.210910    2181 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 13 20:32:26.210937 kubelet[2181]: I0113 20:32:26.210942    2181 kubelet.go:2321] "Starting kubelet main sync loop"
Jan 13 20:32:26.211111 kubelet[2181]: E0113 20:32:26.211087    2181 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 13 20:32:26.295328 kubelet[2181]: E0113 20:32:26.295283    2181 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 20:32:26.311623 kubelet[2181]: E0113 20:32:26.311591    2181 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jan 13 20:32:26.367571 kubelet[2181]: I0113 20:32:26.367430    2181 policy_none.go:49] "None policy: Start"
Jan 13 20:32:26.367671 kubelet[2181]: W0113 20:32:26.367614    2181 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.151:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused
Jan 13 20:32:26.367709 kubelet[2181]: E0113 20:32:26.367681    2181 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.151:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError"
Jan 13 20:32:26.369123 kubelet[2181]: I0113 20:32:26.369104    2181 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 13 20:32:26.369286 kubelet[2181]: I0113 20:32:26.369192    2181 state_mem.go:35] "Initializing new in-memory state store"
Jan 13 20:32:26.376334 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 13 20:32:26.389465 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 13 20:32:26.392283 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 13 20:32:26.396371 kubelet[2181]: E0113 20:32:26.396339    2181 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 20:32:26.396716 kubelet[2181]: E0113 20:32:26.396677    2181 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.151:6443: connect: connection refused" interval="400ms"
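[Annotation] Note the retry interval doubled between the two "Failed to ensure lease exists" lines (interval="200ms", then interval="400ms"): the lease controller backs off while the apiserver is unreachable. A generic doubling backoff with a cap reproduces that progression; the 7s cap below is an assumption for illustration, not a value taken from kubelet source.

// Doubling backoff matching the 200ms -> 400ms progression in the log.
package main

import (
	"fmt"
	"time"
)

func main() {
	interval := 200 * time.Millisecond
	limit := 7 * time.Second // assumed cap, for illustration only
	for i := 0; i < 6; i++ {
		fmt.Println(interval) // 200ms 400ms 800ms 1.6s 3.2s 6.4s
		interval *= 2
		if interval > limit {
			interval = limit
		}
	}
}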
Jan 13 20:32:26.402745 kubelet[2181]: I0113 20:32:26.402717    2181 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 13 20:32:26.402955 kubelet[2181]: I0113 20:32:26.402919    2181 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 13 20:32:26.402985 kubelet[2181]: I0113 20:32:26.402953    2181 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 13 20:32:26.403233 kubelet[2181]: I0113 20:32:26.403206    2181 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 13 20:32:26.404325 kubelet[2181]: E0113 20:32:26.404293    2181 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jan 13 20:32:26.504515 kubelet[2181]: I0113 20:32:26.504472    2181 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jan 13 20:32:26.504981 kubelet[2181]: E0113 20:32:26.504937    2181 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.151:6443/api/v1/nodes\": dial tcp 10.0.0.151:6443: connect: connection refused" node="localhost"
Jan 13 20:32:26.520855 systemd[1]: Created slice kubepods-burstable-pod06d50d3c297398a638a3e08e0f7bf5d8.slice - libcontainer container kubepods-burstable-pod06d50d3c297398a638a3e08e0f7bf5d8.slice.
Jan 13 20:32:26.531126 systemd[1]: Created slice kubepods-burstable-pod50a9ae38ddb3bec3278d8dc73a6a7009.slice - libcontainer container kubepods-burstable-pod50a9ae38ddb3bec3278d8dc73a6a7009.slice.
Jan 13 20:32:26.534994 systemd[1]: Created slice kubepods-burstable-poda52b86ce975f496e6002ba953fa9b888.slice - libcontainer container kubepods-burstable-poda52b86ce975f496e6002ba953fa9b888.slice.
Jan 13 20:32:26.597404 kubelet[2181]: I0113 20:32:26.597343    2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/06d50d3c297398a638a3e08e0f7bf5d8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"06d50d3c297398a638a3e08e0f7bf5d8\") " pod="kube-system/kube-apiserver-localhost"
Jan 13 20:32:26.597404 kubelet[2181]: I0113 20:32:26.597384    2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 20:32:26.597404 kubelet[2181]: I0113 20:32:26.597407    2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a52b86ce975f496e6002ba953fa9b888-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a52b86ce975f496e6002ba953fa9b888\") " pod="kube-system/kube-scheduler-localhost"
Jan 13 20:32:26.597550 kubelet[2181]: I0113 20:32:26.597453    2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/06d50d3c297398a638a3e08e0f7bf5d8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"06d50d3c297398a638a3e08e0f7bf5d8\") " pod="kube-system/kube-apiserver-localhost"
Jan 13 20:32:26.597550 kubelet[2181]: I0113 20:32:26.597493    2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/06d50d3c297398a638a3e08e0f7bf5d8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"06d50d3c297398a638a3e08e0f7bf5d8\") " pod="kube-system/kube-apiserver-localhost"
Jan 13 20:32:26.597550 kubelet[2181]: I0113 20:32:26.597510    2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 20:32:26.597550 kubelet[2181]: I0113 20:32:26.597524    2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 20:32:26.597550 kubelet[2181]: I0113 20:32:26.597542    2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 20:32:26.597660 kubelet[2181]: I0113 20:32:26.597556    2181 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 20:32:26.641635 kubelet[2181]: E0113 20:32:26.641489    2181 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.151:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.151:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181a5ab680773b10  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-13 20:32:26.190592784 +0000 UTC m=+1.801819392,LastTimestamp:2025-01-13 20:32:26.190592784 +0000 UTC m=+1.801819392,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jan 13 20:32:26.706930 kubelet[2181]: I0113 20:32:26.706889    2181 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jan 13 20:32:26.707344 kubelet[2181]: E0113 20:32:26.707301    2181 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.151:6443/api/v1/nodes\": dial tcp 10.0.0.151:6443: connect: connection refused" node="localhost"
Jan 13 20:32:26.797919 kubelet[2181]: E0113 20:32:26.797868    2181 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.151:6443: connect: connection refused" interval="800ms"
Jan 13 20:32:26.830211 kubelet[2181]: E0113 20:32:26.830181    2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:26.830868 containerd[1468]: time="2025-01-13T20:32:26.830820924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:06d50d3c297398a638a3e08e0f7bf5d8,Namespace:kube-system,Attempt:0,}"
Jan 13 20:32:26.833999 kubelet[2181]: E0113 20:32:26.833968    2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:26.834556 containerd[1468]: time="2025-01-13T20:32:26.834341618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:50a9ae38ddb3bec3278d8dc73a6a7009,Namespace:kube-system,Attempt:0,}"
Jan 13 20:32:26.836832 kubelet[2181]: E0113 20:32:26.836794    2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:26.837159 containerd[1468]: time="2025-01-13T20:32:26.837133355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a52b86ce975f496e6002ba953fa9b888,Namespace:kube-system,Attempt:0,}"
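
The dns.go warnings interleaved above fire because the host's resolv.conf lists more nameservers than the resolver limit of three, so kubelet keeps the first three (1.1.1.1 1.0.0.1 8.8.8.8) and drops the rest. A sketch of that clipping, assuming the conventional /etc/resolv.conf location and glibc's MAXNS=3 cap:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNameservers = 3 // glibc MAXNS; kubelet applies the same cap

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("limit exceeded, omitting %d nameserver(s)\n", len(servers)-maxNameservers)
            servers = servers[:maxNameservers]
        }
        fmt.Println("applied nameserver line:", strings.Join(servers, " "))
    }

The warning repeats on every pod sync that rebuilds DNS config, which is why it recurs throughout the rest of this log.
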
Jan 13 20:32:27.108769 kubelet[2181]: I0113 20:32:27.108673    2181 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jan 13 20:32:27.109423 kubelet[2181]: E0113 20:32:27.109375    2181 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.151:6443/api/v1/nodes\": dial tcp 10.0.0.151:6443: connect: connection refused" node="localhost"
Jan 13 20:32:27.182098 kubelet[2181]: W0113 20:32:27.182027    2181 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused
Jan 13 20:32:27.182098 kubelet[2181]: E0113 20:32:27.182098    2181 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.151:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError"
Jan 13 20:32:27.293478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1485289243.mount: Deactivated successfully.
Jan 13 20:32:27.298343 containerd[1468]: time="2025-01-13T20:32:27.298296423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Jan 13 20:32:27.299069 containerd[1468]: time="2025-01-13T20:32:27.299020416Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Jan 13 20:32:27.299787 containerd[1468]: time="2025-01-13T20:32:27.299745449Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Jan 13 20:32:27.303615 containerd[1468]: time="2025-01-13T20:32:27.303578874Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Jan 13 20:32:27.304722 containerd[1468]: time="2025-01-13T20:32:27.304676185Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Jan 13 20:32:27.305433 containerd[1468]: time="2025-01-13T20:32:27.305401818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}  labels:{key:\"io.cri-containerd.pinned\"  value:\"pinned\"}"
Jan 13 20:32:27.306250 containerd[1468]: time="2025-01-13T20:32:27.306202338Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 13 20:32:27.306413 containerd[1468]: time="2025-01-13T20:32:27.306388117Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 471.986492ms"
Jan 13 20:32:27.307469 containerd[1468]: time="2025-01-13T20:32:27.307424341Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 13 20:32:27.310249 containerd[1468]: time="2025-01-13T20:32:27.310193859Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 473.005738ms"
Jan 13 20:32:27.314066 containerd[1468]: time="2025-01-13T20:32:27.314030245Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 483.129033ms"
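
Each of the three sandboxes "pulls" registry.k8s.io/pause:3.8 in roughly 470-480ms; the ImageCreate/ImageUpdate events above show the image already in the local content store, so this is mostly local resolution, and the reported figure is simple wall-clock elapsed time. A minimal timing sketch in the same style (pullImage is a stand-in, not containerd's API):

    package main

    import (
        "fmt"
        "time"
    )

    // pullImage stands in for a real resolve/pull; only the timing pattern matters.
    func pullImage(ref string) error {
        time.Sleep(470 * time.Millisecond) // simulated work
        return nil
    }

    func main() {
        ref := "registry.k8s.io/pause:3.8"
        start := time.Now()
        if err := pullImage(ref); err != nil {
            panic(err)
        }
        fmt.Printf("Pulled image %q in %s\n", ref, time.Since(start))
    }
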
Jan 13 20:32:27.432054 containerd[1468]: time="2025-01-13T20:32:27.431819083Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:32:27.432054 containerd[1468]: time="2025-01-13T20:32:27.431901291Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:32:27.432342 containerd[1468]: time="2025-01-13T20:32:27.432294531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:32:27.433148 containerd[1468]: time="2025-01-13T20:32:27.432960757Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:32:27.433148 containerd[1468]: time="2025-01-13T20:32:27.433016163Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:32:27.433148 containerd[1468]: time="2025-01-13T20:32:27.433033605Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:32:27.433148 containerd[1468]: time="2025-01-13T20:32:27.433109892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:32:27.433363 containerd[1468]: time="2025-01-13T20:32:27.432748896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:32:27.435347 containerd[1468]: time="2025-01-13T20:32:27.435270630Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:32:27.435347 containerd[1468]: time="2025-01-13T20:32:27.435313954Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:32:27.435347 containerd[1468]: time="2025-01-13T20:32:27.435325315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:32:27.435453 containerd[1468]: time="2025-01-13T20:32:27.435383561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:32:27.455102 systemd[1]: Started cri-containerd-268b77675505713904da88a0cc013a73bb2804ada2f869c2053d3b9f080aa105.scope - libcontainer container 268b77675505713904da88a0cc013a73bb2804ada2f869c2053d3b9f080aa105.
Jan 13 20:32:27.456286 systemd[1]: Started cri-containerd-35fb2a22e1de263c2389e6db9aa90d6b24b1d16586c61c31614a4cd231f8f363.scope - libcontainer container 35fb2a22e1de263c2389e6db9aa90d6b24b1d16586c61c31614a4cd231f8f363.
Jan 13 20:32:27.458053 systemd[1]: Started cri-containerd-a37e190cb789a1f94858125196babfe92647a77929b51d43475d28e339df2de9.scope - libcontainer container a37e190cb789a1f94858125196babfe92647a77929b51d43475d28e339df2de9.
Jan 13 20:32:27.487825 containerd[1468]: time="2025-01-13T20:32:27.487768746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:06d50d3c297398a638a3e08e0f7bf5d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"a37e190cb789a1f94858125196babfe92647a77929b51d43475d28e339df2de9\""
Jan 13 20:32:27.488788 kubelet[2181]: E0113 20:32:27.488710    2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:27.490586 containerd[1468]: time="2025-01-13T20:32:27.490555986Z" level=info msg="CreateContainer within sandbox \"a37e190cb789a1f94858125196babfe92647a77929b51d43475d28e339df2de9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 13 20:32:27.492891 containerd[1468]: time="2025-01-13T20:32:27.492620273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:50a9ae38ddb3bec3278d8dc73a6a7009,Namespace:kube-system,Attempt:0,} returns sandbox id \"268b77675505713904da88a0cc013a73bb2804ada2f869c2053d3b9f080aa105\""
Jan 13 20:32:27.493066 kubelet[2181]: W0113 20:32:27.493020    2181 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused
Jan 13 20:32:27.493498 kubelet[2181]: E0113 20:32:27.493081    2181 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.151:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError"
Jan 13 20:32:27.493498 kubelet[2181]: E0113 20:32:27.493275    2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:27.495972 containerd[1468]: time="2025-01-13T20:32:27.495917605Z" level=info msg="CreateContainer within sandbox \"268b77675505713904da88a0cc013a73bb2804ada2f869c2053d3b9f080aa105\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 13 20:32:27.497394 containerd[1468]: time="2025-01-13T20:32:27.497315465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a52b86ce975f496e6002ba953fa9b888,Namespace:kube-system,Attempt:0,} returns sandbox id \"35fb2a22e1de263c2389e6db9aa90d6b24b1d16586c61c31614a4cd231f8f363\""
Jan 13 20:32:27.497852 kubelet[2181]: E0113 20:32:27.497815    2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:27.500060 containerd[1468]: time="2025-01-13T20:32:27.500009576Z" level=info msg="CreateContainer within sandbox \"35fb2a22e1de263c2389e6db9aa90d6b24b1d16586c61c31614a4cd231f8f363\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 13 20:32:27.505811 containerd[1468]: time="2025-01-13T20:32:27.505770195Z" level=info msg="CreateContainer within sandbox \"a37e190cb789a1f94858125196babfe92647a77929b51d43475d28e339df2de9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"92b1dfce3370ab7363fbd0aa10e2e4dcc60fa0b020d3558ba266af534f30aaa1\""
Jan 13 20:32:27.506948 containerd[1468]: time="2025-01-13T20:32:27.506828061Z" level=info msg="StartContainer for \"92b1dfce3370ab7363fbd0aa10e2e4dcc60fa0b020d3558ba266af534f30aaa1\""
Jan 13 20:32:27.511577 containerd[1468]: time="2025-01-13T20:32:27.511542735Z" level=info msg="CreateContainer within sandbox \"268b77675505713904da88a0cc013a73bb2804ada2f869c2053d3b9f080aa105\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"78f2f8ff16d926055f9749d3b36d1cdd0ff058f29e82b69e3e5a8d8a8501c550\""
Jan 13 20:32:27.511989 containerd[1468]: time="2025-01-13T20:32:27.511966698Z" level=info msg="StartContainer for \"78f2f8ff16d926055f9749d3b36d1cdd0ff058f29e82b69e3e5a8d8a8501c550\""
Jan 13 20:32:27.513859 containerd[1468]: time="2025-01-13T20:32:27.513754877Z" level=info msg="CreateContainer within sandbox \"35fb2a22e1de263c2389e6db9aa90d6b24b1d16586c61c31614a4cd231f8f363\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4915d3109fb9d8c781e69e224a0a207fb3eb600a1e832ce3b8852d8c0ed20101\""
Jan 13 20:32:27.514396 containerd[1468]: time="2025-01-13T20:32:27.514367259Z" level=info msg="StartContainer for \"4915d3109fb9d8c781e69e224a0a207fb3eb600a1e832ce3b8852d8c0ed20101\""
Jan 13 20:32:27.531136 systemd[1]: Started cri-containerd-92b1dfce3370ab7363fbd0aa10e2e4dcc60fa0b020d3558ba266af534f30aaa1.scope - libcontainer container 92b1dfce3370ab7363fbd0aa10e2e4dcc60fa0b020d3558ba266af534f30aaa1.
Jan 13 20:32:27.533719 systemd[1]: Started cri-containerd-78f2f8ff16d926055f9749d3b36d1cdd0ff058f29e82b69e3e5a8d8a8501c550.scope - libcontainer container 78f2f8ff16d926055f9749d3b36d1cdd0ff058f29e82b69e3e5a8d8a8501c550.
Jan 13 20:32:27.536067 systemd[1]: Started cri-containerd-4915d3109fb9d8c781e69e224a0a207fb3eb600a1e832ce3b8852d8c0ed20101.scope - libcontainer container 4915d3109fb9d8c781e69e224a0a207fb3eb600a1e832ce3b8852d8c0ed20101.
Jan 13 20:32:27.540142 kubelet[2181]: W0113 20:32:27.540044    2181 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.151:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.151:6443: connect: connection refused
Jan 13 20:32:27.540279 kubelet[2181]: E0113 20:32:27.540249    2181 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.151:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.151:6443: connect: connection refused" logger="UnhandledError"
Jan 13 20:32:27.574093 containerd[1468]: time="2025-01-13T20:32:27.574051457Z" level=info msg="StartContainer for \"78f2f8ff16d926055f9749d3b36d1cdd0ff058f29e82b69e3e5a8d8a8501c550\" returns successfully"
Jan 13 20:32:27.574503 containerd[1468]: time="2025-01-13T20:32:27.574249837Z" level=info msg="StartContainer for \"92b1dfce3370ab7363fbd0aa10e2e4dcc60fa0b020d3558ba266af534f30aaa1\" returns successfully"
Jan 13 20:32:27.593097 containerd[1468]: time="2025-01-13T20:32:27.593059528Z" level=info msg="StartContainer for \"4915d3109fb9d8c781e69e224a0a207fb3eb600a1e832ce3b8852d8c0ed20101\" returns successfully"
Jan 13 20:32:27.599018 kubelet[2181]: E0113 20:32:27.598964    2181 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.151:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.151:6443: connect: connection refused" interval="1.6s"
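
Note the lease-controller retry interval across the three failures: 400ms at 20:32:26.396, 800ms at 20:32:26.797, and 1.6s here, i.e. it doubles after each refused connection. A sketch of that schedule (the 7s ceiling is an assumption for illustration, not read from kubelet source):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Interval as reported in the log: 400ms -> 800ms -> 1.6s, doubling
        // on each failure, presumably up to some ceiling.
        const maxInterval = 7 * time.Second // assumed cap
        interval := 400 * time.Millisecond
        for attempt := 1; attempt <= 5; attempt++ {
            fmt.Printf("attempt %d: will retry in %s\n", attempt, interval)
            interval *= 2
            if interval > maxInterval {
                interval = maxInterval
            }
        }
    }
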
Jan 13 20:32:27.913199 kubelet[2181]: I0113 20:32:27.913035    2181 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jan 13 20:32:28.227093 kubelet[2181]: E0113 20:32:28.226995    2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:28.228229 kubelet[2181]: E0113 20:32:28.228202    2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:28.229359 kubelet[2181]: E0113 20:32:28.229333    2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:29.236547 kubelet[2181]: E0113 20:32:29.236511    2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:29.482641 kubelet[2181]: E0113 20:32:29.482587    2181 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jan 13 20:32:29.492896 kubelet[2181]: E0113 20:32:29.492543    2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:29.579545 kubelet[2181]: I0113 20:32:29.579493    2181 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Jan 13 20:32:29.692867 kubelet[2181]: E0113 20:32:29.692824    2181 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Jan 13 20:32:29.693016 kubelet[2181]: E0113 20:32:29.692996    2181 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:30.186442 kubelet[2181]: I0113 20:32:30.186401    2181 apiserver.go:52] "Watching apiserver"
Jan 13 20:32:30.195876 kubelet[2181]: I0113 20:32:30.195836    2181 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 13 20:32:31.352100 systemd[1]: Reloading requested from client PID 2458 ('systemctl') (unit session-7.scope)...
Jan 13 20:32:31.352117 systemd[1]: Reloading...
Jan 13 20:32:31.411961 zram_generator::config[2497]: No configuration found.
Jan 13 20:32:31.492385 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 13 20:32:31.555318 systemd[1]: Reloading finished in 202 ms.
Jan 13 20:32:31.588706 kubelet[2181]: I0113 20:32:31.588665    2181 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 20:32:31.589078 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:32:31.601240 systemd[1]: kubelet.service: Deactivated successfully.
Jan 13 20:32:31.601428 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:32:31.601472 systemd[1]: kubelet.service: Consumed 2.206s CPU time, 114.8M memory peak, 0B memory swap peak.
Jan 13 20:32:31.613176 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 13 20:32:31.698065 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 13 20:32:31.702359 (kubelet)[2539]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 13 20:32:31.735972 kubelet[2539]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:32:31.735972 kubelet[2539]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 13 20:32:31.735972 kubelet[2539]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 13 20:32:31.735972 kubelet[2539]: I0113 20:32:31.736050    2539 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 13 20:32:31.742246 kubelet[2539]: I0113 20:32:31.742200    2539 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Jan 13 20:32:31.742246 kubelet[2539]: I0113 20:32:31.742231    2539 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 13 20:32:31.742488 kubelet[2539]: I0113 20:32:31.742453    2539 server.go:929] "Client rotation is on, will bootstrap in background"
Jan 13 20:32:31.743822 kubelet[2539]: I0113 20:32:31.743794    2539 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 13 20:32:31.745915 kubelet[2539]: I0113 20:32:31.745880    2539 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 13 20:32:31.750261 kubelet[2539]: E0113 20:32:31.750163    2539 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 13 20:32:31.750261 kubelet[2539]: I0113 20:32:31.750206    2539 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 13 20:32:31.753289 kubelet[2539]: I0113 20:32:31.753260    2539 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Jan 13 20:32:31.753391 kubelet[2539]: I0113 20:32:31.753378    2539 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 13 20:32:31.753497 kubelet[2539]: I0113 20:32:31.753473    2539 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 13 20:32:31.753653 kubelet[2539]: I0113 20:32:31.753498    2539 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
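
The HardEvictionThresholds array in this nodeConfig mixes absolute quantities (memory.available < 100Mi) with percentages of capacity (nodefs.available < 10%, imagefs.available < 15%). A sketch of how such a threshold is evaluated (types simplified from the JSON above; not kubelet's implementation):

    package main

    import "fmt"

    // Threshold mirrors the shape of the HardEvictionThresholds entries above:
    // a signal compared against an absolute quantity or a fraction of capacity.
    type Threshold struct {
        Signal     string
        Quantity   int64   // bytes; 0 when the threshold is percentage-based
        Percentage float64 // fraction of capacity; 0 when quantity-based
    }

    // crossed reports whether available capacity has fallen below the threshold.
    func crossed(t Threshold, available, capacity int64) bool {
        limit := t.Quantity
        if t.Percentage > 0 {
            limit = int64(t.Percentage * float64(capacity))
        }
        return available < limit
    }

    func main() {
        memory := Threshold{Signal: "memory.available", Quantity: 100 << 20} // 100Mi
        nodefs := Threshold{Signal: "nodefs.available", Percentage: 0.10}

        fmt.Println(crossed(memory, 64<<20, 4<<30)) // true: 64Mi available < 100Mi
        fmt.Println(crossed(nodefs, 8<<30, 40<<30)) // false: 20% free > 10% floor
    }
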
Jan 13 20:32:31.753722 kubelet[2539]: I0113 20:32:31.753662    2539 topology_manager.go:138] "Creating topology manager with none policy"
Jan 13 20:32:31.753722 kubelet[2539]: I0113 20:32:31.753672    2539 container_manager_linux.go:300] "Creating device plugin manager"
Jan 13 20:32:31.753769 kubelet[2539]: I0113 20:32:31.753737    2539 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:32:31.753850 kubelet[2539]: I0113 20:32:31.753830    2539 kubelet.go:408] "Attempting to sync node with API server"
Jan 13 20:32:31.753850 kubelet[2539]: I0113 20:32:31.753844    2539 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 13 20:32:31.753900 kubelet[2539]: I0113 20:32:31.753858    2539 kubelet.go:314] "Adding apiserver pod source"
Jan 13 20:32:31.753900 kubelet[2539]: I0113 20:32:31.753880    2539 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 13 20:32:31.757045 kubelet[2539]: I0113 20:32:31.757018    2539 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jan 13 20:32:31.757502 kubelet[2539]: I0113 20:32:31.757486    2539 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 13 20:32:31.758980 kubelet[2539]: I0113 20:32:31.757886    2539 server.go:1269] "Started kubelet"
Jan 13 20:32:31.758980 kubelet[2539]: I0113 20:32:31.758143    2539 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 13 20:32:31.758980 kubelet[2539]: I0113 20:32:31.758208    2539 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 13 20:32:31.758980 kubelet[2539]: I0113 20:32:31.758430    2539 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 13 20:32:31.759179 kubelet[2539]: I0113 20:32:31.759163    2539 server.go:460] "Adding debug handlers to kubelet server"
Jan 13 20:32:31.761452 kubelet[2539]: I0113 20:32:31.761425    2539 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 13 20:32:31.763330 kubelet[2539]: I0113 20:32:31.761699    2539 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 13 20:32:31.763330 kubelet[2539]: I0113 20:32:31.761748    2539 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 13 20:32:31.763330 kubelet[2539]: I0113 20:32:31.761829    2539 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 13 20:32:31.763330 kubelet[2539]: I0113 20:32:31.761952    2539 reconciler.go:26] "Reconciler: start to sync state"
Jan 13 20:32:31.763330 kubelet[2539]: E0113 20:32:31.762564    2539 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 13 20:32:31.763330 kubelet[2539]: E0113 20:32:31.762812    2539 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Jan 13 20:32:31.764335 kubelet[2539]: I0113 20:32:31.764311    2539 factory.go:221] Registration of the systemd container factory successfully
Jan 13 20:32:31.764505 kubelet[2539]: I0113 20:32:31.764485    2539 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 13 20:32:31.781100 kubelet[2539]: I0113 20:32:31.781064    2539 factory.go:221] Registration of the containerd container factory successfully
Jan 13 20:32:31.785857 kubelet[2539]: I0113 20:32:31.785821    2539 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 13 20:32:31.787407 kubelet[2539]: I0113 20:32:31.787373    2539 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 13 20:32:31.787407 kubelet[2539]: I0113 20:32:31.787396    2539 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 13 20:32:31.787499 kubelet[2539]: I0113 20:32:31.787422    2539 kubelet.go:2321] "Starting kubelet main sync loop"
Jan 13 20:32:31.787499 kubelet[2539]: E0113 20:32:31.787462    2539 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 13 20:32:31.814252 kubelet[2539]: I0113 20:32:31.814227    2539 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 13 20:32:31.814427 kubelet[2539]: I0113 20:32:31.814412    2539 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 13 20:32:31.814498 kubelet[2539]: I0113 20:32:31.814487    2539 state_mem.go:36] "Initialized new in-memory state store"
Jan 13 20:32:31.814696 kubelet[2539]: I0113 20:32:31.814679    2539 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 13 20:32:31.814765 kubelet[2539]: I0113 20:32:31.814742    2539 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 13 20:32:31.814811 kubelet[2539]: I0113 20:32:31.814802    2539 policy_none.go:49] "None policy: Start"
Jan 13 20:32:31.815512 kubelet[2539]: I0113 20:32:31.815492    2539 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 13 20:32:31.815512 kubelet[2539]: I0113 20:32:31.815516    2539 state_mem.go:35] "Initializing new in-memory state store"
Jan 13 20:32:31.815667 kubelet[2539]: I0113 20:32:31.815652    2539 state_mem.go:75] "Updated machine memory state"
Jan 13 20:32:31.821018 kubelet[2539]: I0113 20:32:31.820917    2539 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 13 20:32:31.821211 kubelet[2539]: I0113 20:32:31.821188    2539 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 13 20:32:31.821328 kubelet[2539]: I0113 20:32:31.821200    2539 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 13 20:32:31.822300 kubelet[2539]: I0113 20:32:31.821445    2539 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 13 20:32:31.924678 kubelet[2539]: I0113 20:32:31.924584    2539 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Jan 13 20:32:31.929682 kubelet[2539]: I0113 20:32:31.929654    2539 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Jan 13 20:32:31.929793 kubelet[2539]: I0113 20:32:31.929730    2539 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Jan 13 20:32:31.963310 kubelet[2539]: I0113 20:32:31.963279    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/06d50d3c297398a638a3e08e0f7bf5d8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"06d50d3c297398a638a3e08e0f7bf5d8\") " pod="kube-system/kube-apiserver-localhost"
Jan 13 20:32:31.963310 kubelet[2539]: I0113 20:32:31.963318    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/06d50d3c297398a638a3e08e0f7bf5d8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"06d50d3c297398a638a3e08e0f7bf5d8\") " pod="kube-system/kube-apiserver-localhost"
Jan 13 20:32:31.963310 kubelet[2539]: I0113 20:32:31.963339    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/06d50d3c297398a638a3e08e0f7bf5d8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"06d50d3c297398a638a3e08e0f7bf5d8\") " pod="kube-system/kube-apiserver-localhost"
Jan 13 20:32:31.963310 kubelet[2539]: I0113 20:32:31.963360    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 20:32:31.963310 kubelet[2539]: I0113 20:32:31.963378    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 20:32:31.963562 kubelet[2539]: I0113 20:32:31.963396    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a52b86ce975f496e6002ba953fa9b888-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a52b86ce975f496e6002ba953fa9b888\") " pod="kube-system/kube-scheduler-localhost"
Jan 13 20:32:31.963562 kubelet[2539]: I0113 20:32:31.963410    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 20:32:31.963562 kubelet[2539]: I0113 20:32:31.963424    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 20:32:31.963562 kubelet[2539]: I0113 20:32:31.963439    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50a9ae38ddb3bec3278d8dc73a6a7009-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"50a9ae38ddb3bec3278d8dc73a6a7009\") " pod="kube-system/kube-controller-manager-localhost"
Jan 13 20:32:32.193533 kubelet[2539]: E0113 20:32:32.193369    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:32.194840 kubelet[2539]: E0113 20:32:32.194812    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:32.194940 kubelet[2539]: E0113 20:32:32.194814    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:32.754893 kubelet[2539]: I0113 20:32:32.754852    2539 apiserver.go:52] "Watching apiserver"
Jan 13 20:32:32.762131 kubelet[2539]: I0113 20:32:32.762097    2539 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 13 20:32:32.801582 kubelet[2539]: E0113 20:32:32.801540    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:32.801743 kubelet[2539]: E0113 20:32:32.801718    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:32.801743 kubelet[2539]: E0113 20:32:32.801738    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:32.828285 kubelet[2539]: I0113 20:32:32.828220    2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.828203143 podStartE2EDuration="1.828203143s" podCreationTimestamp="2025-01-13 20:32:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:32:32.820867384 +0000 UTC m=+1.115732037" watchObservedRunningTime="2025-01-13 20:32:32.828203143 +0000 UTC m=+1.123067796"
Jan 13 20:32:32.828963 kubelet[2539]: I0113 20:32:32.828417    2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.828409679 podStartE2EDuration="1.828409679s" podCreationTimestamp="2025-01-13 20:32:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:32:32.827420884 +0000 UTC m=+1.122285577" watchObservedRunningTime="2025-01-13 20:32:32.828409679 +0000 UTC m=+1.123274332"
Jan 13 20:32:32.845060 kubelet[2539]: I0113 20:32:32.844988    2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.844972222 podStartE2EDuration="1.844972222s" podCreationTimestamp="2025-01-13 20:32:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:32:32.834906335 +0000 UTC m=+1.129770988" watchObservedRunningTime="2025-01-13 20:32:32.844972222 +0000 UTC m=+1.139836875"
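
In these three pod_startup_latency_tracker lines the pull timestamps are the zero time (nothing was pulled), so podStartSLOduration equals podStartE2EDuration: watchObservedRunningTime minus podCreationTimestamp. Checking the kube-controller-manager numbers:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST" // format used in the log
        created, err := time.Parse(layout, "2025-01-13 20:32:31 +0000 UTC")
        if err != nil {
            panic(err)
        }
        observed, err := time.Parse(layout, "2025-01-13 20:32:32.828203143 +0000 UTC")
        if err != nil {
            panic(err)
        }
        fmt.Println(observed.Sub(created)) // 1.828203143s, the reported podStartSLOduration
    }
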
Jan 13 20:32:33.802956 kubelet[2539]: E0113 20:32:33.802819    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:35.190294 kubelet[2539]: E0113 20:32:35.190257    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:36.371774 sudo[1644]: pam_unix(sudo:session): session closed for user root
Jan 13 20:32:36.372892 sshd[1643]: Connection closed by 10.0.0.1 port 37704
Jan 13 20:32:36.373333 sshd-session[1641]: pam_unix(sshd:session): session closed for user core
Jan 13 20:32:36.376503 systemd[1]: sshd@6-10.0.0.151:22-10.0.0.1:37704.service: Deactivated successfully.
Jan 13 20:32:36.378082 systemd[1]: session-7.scope: Deactivated successfully.
Jan 13 20:32:36.378229 systemd[1]: session-7.scope: Consumed 6.607s CPU time, 155.7M memory peak, 0B memory swap peak.
Jan 13 20:32:36.378670 systemd-logind[1456]: Session 7 logged out. Waiting for processes to exit.
Jan 13 20:32:36.380065 systemd-logind[1456]: Removed session 7.
Jan 13 20:32:37.051036 update_engine[1458]: I20250113 20:32:37.050966  1458 update_attempter.cc:509] Updating boot flags...
Jan 13 20:32:37.106951 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2642)
Jan 13 20:32:37.131949 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2646)
Jan 13 20:32:37.154057 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2646)
Jan 13 20:32:37.379355 kubelet[2539]: I0113 20:32:37.379199    2539 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 13 20:32:37.379776 kubelet[2539]: I0113 20:32:37.379692    2539 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 13 20:32:37.379811 containerd[1468]: time="2025-01-13T20:32:37.379531360Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
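
The runtime-config update above hands the CRI this node's pod CIDR, 192.168.0.0/24, from which pod IPs on the node will be allocated once a CNI config is dropped in. A quick sanity check of that range:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Pod CIDR pushed to the runtime in the log lines above.
        _, cidr, err := net.ParseCIDR("192.168.0.0/24")
        if err != nil {
            panic(err)
        }
        ones, bits := cidr.Mask.Size()
        fmt.Printf("network %s holds %d addresses\n", cidr, 1<<(bits-ones)) // 256
        fmt.Println(cidr.Contains(net.ParseIP("192.168.0.17")))            // true
        fmt.Println(cidr.Contains(net.ParseIP("10.0.0.151")))              // false: node IP, not a pod IP
    }
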
Jan 13 20:32:37.908156 kubelet[2539]: I0113 20:32:37.907872    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/40712327-c9fc-4dec-a5b3-431c51f518d9-kube-proxy\") pod \"kube-proxy-4qj6n\" (UID: \"40712327-c9fc-4dec-a5b3-431c51f518d9\") " pod="kube-system/kube-proxy-4qj6n"
Jan 13 20:32:37.908156 kubelet[2539]: I0113 20:32:37.907912    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40712327-c9fc-4dec-a5b3-431c51f518d9-lib-modules\") pod \"kube-proxy-4qj6n\" (UID: \"40712327-c9fc-4dec-a5b3-431c51f518d9\") " pod="kube-system/kube-proxy-4qj6n"
Jan 13 20:32:37.908156 kubelet[2539]: I0113 20:32:37.907949    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/40712327-c9fc-4dec-a5b3-431c51f518d9-xtables-lock\") pod \"kube-proxy-4qj6n\" (UID: \"40712327-c9fc-4dec-a5b3-431c51f518d9\") " pod="kube-system/kube-proxy-4qj6n"
Jan 13 20:32:37.908156 kubelet[2539]: I0113 20:32:37.907966    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn7pk\" (UniqueName: \"kubernetes.io/projected/40712327-c9fc-4dec-a5b3-431c51f518d9-kube-api-access-sn7pk\") pod \"kube-proxy-4qj6n\" (UID: \"40712327-c9fc-4dec-a5b3-431c51f518d9\") " pod="kube-system/kube-proxy-4qj6n"
Jan 13 20:32:37.909366 systemd[1]: Created slice kubepods-besteffort-pod40712327_c9fc_4dec_a5b3_431c51f518d9.slice - libcontainer container kubepods-besteffort-pod40712327_c9fc_4dec_a5b3_431c51f518d9.slice.
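
The slice name here shows how pod UIDs are embedded in systemd units: kubepods-besteffort-pod40712327_c9fc_4dec_a5b3_431c51f518d9.slice escapes the UID's dashes to underscores, because systemd interprets "-" in a slice name as a hierarchy separator. A sketch of that naming rule as visible in the log:

    package main

    import (
        "fmt"
        "strings"
    )

    // sliceNameForPod reproduces the naming seen in the log: the QoS-class
    // parent slice plus the pod UID with "-" escaped to "_".
    func sliceNameForPod(qosParent, uid string) string {
        escaped := strings.ReplaceAll(uid, "-", "_")
        return fmt.Sprintf("%s-pod%s.slice", qosParent, escaped)
    }

    func main() {
        fmt.Println(sliceNameForPod("kubepods-besteffort", "40712327-c9fc-4dec-a5b3-431c51f518d9"))
        // kubepods-besteffort-pod40712327_c9fc_4dec_a5b3_431c51f518d9.slice
    }
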
Jan 13 20:32:38.015773 kubelet[2539]: E0113 20:32:38.015724    2539 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Jan 13 20:32:38.015773 kubelet[2539]: E0113 20:32:38.015760    2539 projected.go:194] Error preparing data for projected volume kube-api-access-sn7pk for pod kube-system/kube-proxy-4qj6n: configmap "kube-root-ca.crt" not found
Jan 13 20:32:38.015941 kubelet[2539]: E0113 20:32:38.015814    2539 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/40712327-c9fc-4dec-a5b3-431c51f518d9-kube-api-access-sn7pk podName:40712327-c9fc-4dec-a5b3-431c51f518d9 nodeName:}" failed. No retries permitted until 2025-01-13 20:32:38.515795408 +0000 UTC m=+6.810660061 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-sn7pk" (UniqueName: "kubernetes.io/projected/40712327-c9fc-4dec-a5b3-431c51f518d9-kube-api-access-sn7pk") pod "kube-proxy-4qj6n" (UID: "40712327-c9fc-4dec-a5b3-431c51f518d9") : configmap "kube-root-ca.crt" not found
Jan 13 20:32:38.424330 systemd[1]: Created slice kubepods-besteffort-pod3a4de990_3f92_473a_bdfd_12babdd3beeb.slice - libcontainer container kubepods-besteffort-pod3a4de990_3f92_473a_bdfd_12babdd3beeb.slice.
Jan 13 20:32:38.511856 kubelet[2539]: I0113 20:32:38.511775    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2ldn\" (UniqueName: \"kubernetes.io/projected/3a4de990-3f92-473a-bdfd-12babdd3beeb-kube-api-access-p2ldn\") pod \"tigera-operator-76c4976dd7-k2rzf\" (UID: \"3a4de990-3f92-473a-bdfd-12babdd3beeb\") " pod="tigera-operator/tigera-operator-76c4976dd7-k2rzf"
Jan 13 20:32:38.511856 kubelet[2539]: I0113 20:32:38.511813    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3a4de990-3f92-473a-bdfd-12babdd3beeb-var-lib-calico\") pod \"tigera-operator-76c4976dd7-k2rzf\" (UID: \"3a4de990-3f92-473a-bdfd-12babdd3beeb\") " pod="tigera-operator/tigera-operator-76c4976dd7-k2rzf"
Jan 13 20:32:38.727949 containerd[1468]: time="2025-01-13T20:32:38.727799403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-k2rzf,Uid:3a4de990-3f92-473a-bdfd-12babdd3beeb,Namespace:tigera-operator,Attempt:0,}"
Jan 13 20:32:38.746282 containerd[1468]: time="2025-01-13T20:32:38.746145946Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:32:38.746282 containerd[1468]: time="2025-01-13T20:32:38.746201229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:32:38.746282 containerd[1468]: time="2025-01-13T20:32:38.746216150Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:32:38.746785 containerd[1468]: time="2025-01-13T20:32:38.746379319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:32:38.765081 systemd[1]: Started cri-containerd-3261976da85b8c5427bb0af9adb88d06f7612fe05c2c1f1c2344c899dcf96ee4.scope - libcontainer container 3261976da85b8c5427bb0af9adb88d06f7612fe05c2c1f1c2344c899dcf96ee4.
Jan 13 20:32:38.788216 containerd[1468]: time="2025-01-13T20:32:38.788116207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-k2rzf,Uid:3a4de990-3f92-473a-bdfd-12babdd3beeb,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3261976da85b8c5427bb0af9adb88d06f7612fe05c2c1f1c2344c899dcf96ee4\""
Jan 13 20:32:38.790909 containerd[1468]: time="2025-01-13T20:32:38.790758114Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Jan 13 20:32:38.818634 kubelet[2539]: E0113 20:32:38.818594    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:38.819150 containerd[1468]: time="2025-01-13T20:32:38.819113856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4qj6n,Uid:40712327-c9fc-4dec-a5b3-431c51f518d9,Namespace:kube-system,Attempt:0,}"
Jan 13 20:32:38.836314 containerd[1468]: time="2025-01-13T20:32:38.836065882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:32:38.836314 containerd[1468]: time="2025-01-13T20:32:38.836118365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:32:38.836314 containerd[1468]: time="2025-01-13T20:32:38.836129525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:32:38.836314 containerd[1468]: time="2025-01-13T20:32:38.836201369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:32:38.856081 systemd[1]: Started cri-containerd-6b8b94c9f5380f7d571f8d2ebdf6932cf71c21a39d24eaf8e530a20f2c3211ad.scope - libcontainer container 6b8b94c9f5380f7d571f8d2ebdf6932cf71c21a39d24eaf8e530a20f2c3211ad.
Jan 13 20:32:38.871600 containerd[1468]: time="2025-01-13T20:32:38.871544581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4qj6n,Uid:40712327-c9fc-4dec-a5b3-431c51f518d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b8b94c9f5380f7d571f8d2ebdf6932cf71c21a39d24eaf8e530a20f2c3211ad\""
Jan 13 20:32:38.872532 kubelet[2539]: E0113 20:32:38.872366    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:38.874487 containerd[1468]: time="2025-01-13T20:32:38.874449063Z" level=info msg="CreateContainer within sandbox \"6b8b94c9f5380f7d571f8d2ebdf6932cf71c21a39d24eaf8e530a20f2c3211ad\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 13 20:32:38.886101 containerd[1468]: time="2025-01-13T20:32:38.886047950Z" level=info msg="CreateContainer within sandbox \"6b8b94c9f5380f7d571f8d2ebdf6932cf71c21a39d24eaf8e530a20f2c3211ad\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0c4e33d2adcee9fcc67789ded27af3691a8b72f3fd34eab9fb3163a405192599\""
Jan 13 20:32:38.886657 containerd[1468]: time="2025-01-13T20:32:38.886629902Z" level=info msg="StartContainer for \"0c4e33d2adcee9fcc67789ded27af3691a8b72f3fd34eab9fb3163a405192599\""
Jan 13 20:32:38.908112 systemd[1]: Started cri-containerd-0c4e33d2adcee9fcc67789ded27af3691a8b72f3fd34eab9fb3163a405192599.scope - libcontainer container 0c4e33d2adcee9fcc67789ded27af3691a8b72f3fd34eab9fb3163a405192599.
Jan 13 20:32:38.931490 containerd[1468]: time="2025-01-13T20:32:38.931442282Z" level=info msg="StartContainer for \"0c4e33d2adcee9fcc67789ded27af3691a8b72f3fd34eab9fb3163a405192599\" returns successfully"
Jan 13 20:32:39.811740 kubelet[2539]: E0113 20:32:39.811675    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:39.819950 kubelet[2539]: I0113 20:32:39.819874    2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4qj6n" podStartSLOduration=2.819859942 podStartE2EDuration="2.819859942s" podCreationTimestamp="2025-01-13 20:32:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:32:39.819687733 +0000 UTC m=+8.114552386" watchObservedRunningTime="2025-01-13 20:32:39.819859942 +0000 UTC m=+8.114724555"
Jan 13 20:32:40.150279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2452025740.mount: Deactivated successfully.
Jan 13 20:32:40.351413 kubelet[2539]: E0113 20:32:40.351372    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:40.813181 kubelet[2539]: E0113 20:32:40.813100    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:41.502135 kubelet[2539]: E0113 20:32:41.502102    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:41.694963 containerd[1468]: time="2025-01-13T20:32:41.694900362Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:32:41.697365 containerd[1468]: time="2025-01-13T20:32:41.697327559Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19125984"
Jan 13 20:32:41.698099 containerd[1468]: time="2025-01-13T20:32:41.698043834Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:32:41.700276 containerd[1468]: time="2025-01-13T20:32:41.700209138Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:32:41.700972 containerd[1468]: time="2025-01-13T20:32:41.700938813Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 2.910147537s"
Jan 13 20:32:41.701045 containerd[1468]: time="2025-01-13T20:32:41.700974095Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\""
Jan 13 20:32:41.704729 containerd[1468]: time="2025-01-13T20:32:41.704648272Z" level=info msg="CreateContainer within sandbox \"3261976da85b8c5427bb0af9adb88d06f7612fe05c2c1f1c2344c899dcf96ee4\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 13 20:32:41.714386 containerd[1468]: time="2025-01-13T20:32:41.714352739Z" level=info msg="CreateContainer within sandbox \"3261976da85b8c5427bb0af9adb88d06f7612fe05c2c1f1c2344c899dcf96ee4\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"adb4c61edc88c35008702b3a0ea679817974591bbff56e039b46a9ee2f6a8ad2\""
Jan 13 20:32:41.714780 containerd[1468]: time="2025-01-13T20:32:41.714716317Z" level=info msg="StartContainer for \"adb4c61edc88c35008702b3a0ea679817974591bbff56e039b46a9ee2f6a8ad2\""
Jan 13 20:32:41.743066 systemd[1]: Started cri-containerd-adb4c61edc88c35008702b3a0ea679817974591bbff56e039b46a9ee2f6a8ad2.scope - libcontainer container adb4c61edc88c35008702b3a0ea679817974591bbff56e039b46a9ee2f6a8ad2.
Jan 13 20:32:41.762003 containerd[1468]: time="2025-01-13T20:32:41.761859147Z" level=info msg="StartContainer for \"adb4c61edc88c35008702b3a0ea679817974591bbff56e039b46a9ee2f6a8ad2\" returns successfully"
Jan 13 20:32:41.816048 kubelet[2539]: E0113 20:32:41.816021    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:41.829106 kubelet[2539]: I0113 20:32:41.828956    2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-k2rzf" podStartSLOduration=0.915273743 podStartE2EDuration="3.828939578s" podCreationTimestamp="2025-01-13 20:32:38 +0000 UTC" firstStartedPulling="2025-01-13 20:32:38.789543207 +0000 UTC m=+7.084407820" lastFinishedPulling="2025-01-13 20:32:41.703209002 +0000 UTC m=+9.998073655" observedRunningTime="2025-01-13 20:32:41.828865094 +0000 UTC m=+10.123729747" watchObservedRunningTime="2025-01-13 20:32:41.828939578 +0000 UTC m=+10.123804231"
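
Putting the two pod_startup_latency_tracker lines side by side suggests how podStartSLOduration is derived: it is the end-to-end startup time (watchObservedRunningTime minus podCreationTimestamp) minus the time spent pulling images, which is why kube-proxy's two values coincide (its pull timestamps are the zero time, i.e. the image was already present) while tigera-operator's differ by its ~2.9s pull. A sketch of that arithmetic using the timestamps from the line above; the function names are ours, not the tracker's:

    package main

    import (
    	"fmt"
    	"time"
    )

    // startupDurations reproduces the relationship visible in the two tracker
    // lines: SLO duration = end-to-end duration minus image-pull time, with
    // zero pull timestamps meaning no pull happened.
    func startupDurations(created, firstPull, lastPull, running time.Time) (e2e, slo time.Duration) {
    	e2e = running.Sub(created)
    	slo = e2e
    	if !firstPull.IsZero() {
    		slo -= lastPull.Sub(firstPull)
    	}
    	return
    }

    func main() {
    	created := time.Date(2025, 1, 13, 20, 32, 38, 0, time.UTC)
    	firstPull := time.Date(2025, 1, 13, 20, 32, 38, 789543207, time.UTC)
    	lastPull := time.Date(2025, 1, 13, 20, 32, 41, 703209002, time.UTC)
    	running := time.Date(2025, 1, 13, 20, 32, 41, 828939578, time.UTC)
    	e2e, slo := startupDurations(created, firstPull, lastPull, running)
    	fmt.Println(e2e, slo) // 3.828939578s, 0.915273783s — within ~40ns of the logged values (clock rounding)
    }
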
Jan 13 20:32:42.817562 kubelet[2539]: E0113 20:32:42.817489    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:45.201810 kubelet[2539]: E0113 20:32:45.201780    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:45.749503 systemd[1]: Created slice kubepods-besteffort-pod8fa68604_379c_44e9_8992_5a41fb82b27b.slice - libcontainer container kubepods-besteffort-pod8fa68604_379c_44e9_8992_5a41fb82b27b.slice.
Jan 13 20:32:45.758295 kubelet[2539]: I0113 20:32:45.758243    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf7mp\" (UniqueName: \"kubernetes.io/projected/8fa68604-379c-44e9-8992-5a41fb82b27b-kube-api-access-pf7mp\") pod \"calico-typha-6fbccd4785-rjgpm\" (UID: \"8fa68604-379c-44e9-8992-5a41fb82b27b\") " pod="calico-system/calico-typha-6fbccd4785-rjgpm"
Jan 13 20:32:45.758295 kubelet[2539]: I0113 20:32:45.758296    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8fa68604-379c-44e9-8992-5a41fb82b27b-tigera-ca-bundle\") pod \"calico-typha-6fbccd4785-rjgpm\" (UID: \"8fa68604-379c-44e9-8992-5a41fb82b27b\") " pod="calico-system/calico-typha-6fbccd4785-rjgpm"
Jan 13 20:32:45.758463 kubelet[2539]: I0113 20:32:45.758315    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8fa68604-379c-44e9-8992-5a41fb82b27b-typha-certs\") pod \"calico-typha-6fbccd4785-rjgpm\" (UID: \"8fa68604-379c-44e9-8992-5a41fb82b27b\") " pod="calico-system/calico-typha-6fbccd4785-rjgpm"
Jan 13 20:32:45.799503 systemd[1]: Created slice kubepods-besteffort-pod394353f5_94cf_4d34_9b81_4e7ae346fd64.slice - libcontainer container kubepods-besteffort-pod394353f5_94cf_4d34_9b81_4e7ae346fd64.slice.
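
The slice names in these "Created slice" lines are mechanical: kubepods-<qos>-pod<uid>.slice, with the pod UID's dashes turned into underscores so they survive systemd's unit-name rules (see the \x2d escaping noted earlier). A sketch, with a helper name of our own choosing:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // podSliceName builds the systemd slice name the kubelet's systemd cgroup
    // driver uses for a pod, as seen in the log lines above.
    func podSliceName(qos, uid string) string {
    	return "kubepods-" + qos + "-pod" + strings.ReplaceAll(uid, "-", "_") + ".slice"
    }

    func main() {
    	fmt.Println(podSliceName("besteffort", "394353f5-94cf-4d34-9b81-4e7ae346fd64"))
    	// kubepods-besteffort-pod394353f5_94cf_4d34_9b81_4e7ae346fd64.slice
    }
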
Jan 13 20:32:45.859089 kubelet[2539]: I0113 20:32:45.859043    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/394353f5-94cf-4d34-9b81-4e7ae346fd64-tigera-ca-bundle\") pod \"calico-node-mtrcp\" (UID: \"394353f5-94cf-4d34-9b81-4e7ae346fd64\") " pod="calico-system/calico-node-mtrcp"
Jan 13 20:32:45.859089 kubelet[2539]: I0113 20:32:45.859088    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/394353f5-94cf-4d34-9b81-4e7ae346fd64-var-lib-calico\") pod \"calico-node-mtrcp\" (UID: \"394353f5-94cf-4d34-9b81-4e7ae346fd64\") " pod="calico-system/calico-node-mtrcp"
Jan 13 20:32:45.859261 kubelet[2539]: I0113 20:32:45.859119    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/394353f5-94cf-4d34-9b81-4e7ae346fd64-policysync\") pod \"calico-node-mtrcp\" (UID: \"394353f5-94cf-4d34-9b81-4e7ae346fd64\") " pod="calico-system/calico-node-mtrcp"
Jan 13 20:32:45.859261 kubelet[2539]: I0113 20:32:45.859139    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/394353f5-94cf-4d34-9b81-4e7ae346fd64-lib-modules\") pod \"calico-node-mtrcp\" (UID: \"394353f5-94cf-4d34-9b81-4e7ae346fd64\") " pod="calico-system/calico-node-mtrcp"
Jan 13 20:32:45.859261 kubelet[2539]: I0113 20:32:45.859158    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/394353f5-94cf-4d34-9b81-4e7ae346fd64-xtables-lock\") pod \"calico-node-mtrcp\" (UID: \"394353f5-94cf-4d34-9b81-4e7ae346fd64\") " pod="calico-system/calico-node-mtrcp"
Jan 13 20:32:45.859261 kubelet[2539]: I0113 20:32:45.859174    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/394353f5-94cf-4d34-9b81-4e7ae346fd64-cni-bin-dir\") pod \"calico-node-mtrcp\" (UID: \"394353f5-94cf-4d34-9b81-4e7ae346fd64\") " pod="calico-system/calico-node-mtrcp"
Jan 13 20:32:45.859261 kubelet[2539]: I0113 20:32:45.859187    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/394353f5-94cf-4d34-9b81-4e7ae346fd64-cni-net-dir\") pod \"calico-node-mtrcp\" (UID: \"394353f5-94cf-4d34-9b81-4e7ae346fd64\") " pod="calico-system/calico-node-mtrcp"
Jan 13 20:32:45.859372 kubelet[2539]: I0113 20:32:45.859223    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9r69\" (UniqueName: \"kubernetes.io/projected/394353f5-94cf-4d34-9b81-4e7ae346fd64-kube-api-access-q9r69\") pod \"calico-node-mtrcp\" (UID: \"394353f5-94cf-4d34-9b81-4e7ae346fd64\") " pod="calico-system/calico-node-mtrcp"
Jan 13 20:32:45.859372 kubelet[2539]: I0113 20:32:45.859244    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/394353f5-94cf-4d34-9b81-4e7ae346fd64-flexvol-driver-host\") pod \"calico-node-mtrcp\" (UID: \"394353f5-94cf-4d34-9b81-4e7ae346fd64\") " pod="calico-system/calico-node-mtrcp"
Jan 13 20:32:45.859372 kubelet[2539]: I0113 20:32:45.859260    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/394353f5-94cf-4d34-9b81-4e7ae346fd64-var-run-calico\") pod \"calico-node-mtrcp\" (UID: \"394353f5-94cf-4d34-9b81-4e7ae346fd64\") " pod="calico-system/calico-node-mtrcp"
Jan 13 20:32:45.859372 kubelet[2539]: I0113 20:32:45.859275    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/394353f5-94cf-4d34-9b81-4e7ae346fd64-cni-log-dir\") pod \"calico-node-mtrcp\" (UID: \"394353f5-94cf-4d34-9b81-4e7ae346fd64\") " pod="calico-system/calico-node-mtrcp"
Jan 13 20:32:45.859372 kubelet[2539]: I0113 20:32:45.859290    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/394353f5-94cf-4d34-9b81-4e7ae346fd64-node-certs\") pod \"calico-node-mtrcp\" (UID: \"394353f5-94cf-4d34-9b81-4e7ae346fd64\") " pod="calico-system/calico-node-mtrcp"
Jan 13 20:32:45.897887 kubelet[2539]: E0113 20:32:45.897669    2539 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mz55k" podUID="1ccd530c-1ce4-41fc-b0fc-1d9142439edd"
Jan 13 20:32:45.960875 kubelet[2539]: I0113 20:32:45.960070    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1ccd530c-1ce4-41fc-b0fc-1d9142439edd-kubelet-dir\") pod \"csi-node-driver-mz55k\" (UID: \"1ccd530c-1ce4-41fc-b0fc-1d9142439edd\") " pod="calico-system/csi-node-driver-mz55k"
Jan 13 20:32:45.960875 kubelet[2539]: I0113 20:32:45.960146    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7dng\" (UniqueName: \"kubernetes.io/projected/1ccd530c-1ce4-41fc-b0fc-1d9142439edd-kube-api-access-c7dng\") pod \"csi-node-driver-mz55k\" (UID: \"1ccd530c-1ce4-41fc-b0fc-1d9142439edd\") " pod="calico-system/csi-node-driver-mz55k"
Jan 13 20:32:45.960875 kubelet[2539]: I0113 20:32:45.960182    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1ccd530c-1ce4-41fc-b0fc-1d9142439edd-varrun\") pod \"csi-node-driver-mz55k\" (UID: \"1ccd530c-1ce4-41fc-b0fc-1d9142439edd\") " pod="calico-system/csi-node-driver-mz55k"
Jan 13 20:32:45.960875 kubelet[2539]: I0113 20:32:45.960267    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1ccd530c-1ce4-41fc-b0fc-1d9142439edd-socket-dir\") pod \"csi-node-driver-mz55k\" (UID: \"1ccd530c-1ce4-41fc-b0fc-1d9142439edd\") " pod="calico-system/csi-node-driver-mz55k"
Jan 13 20:32:45.960875 kubelet[2539]: I0113 20:32:45.960585    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1ccd530c-1ce4-41fc-b0fc-1d9142439edd-registration-dir\") pod \"csi-node-driver-mz55k\" (UID: \"1ccd530c-1ce4-41fc-b0fc-1d9142439edd\") " pod="calico-system/csi-node-driver-mz55k"
Jan 13 20:32:45.970576 kubelet[2539]: E0113 20:32:45.970536    2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:32:45.970576 kubelet[2539]: W0113 20:32:45.970564    2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:32:45.970751 kubelet[2539]: E0113 20:32:45.970602    2539 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:32:45.970937 kubelet[2539]: E0113 20:32:45.970905    2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:32:45.970937 kubelet[2539]: W0113 20:32:45.970918    2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:32:45.971006 kubelet[2539]: E0113 20:32:45.970942    2539 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:32:45.972134 kubelet[2539]: E0113 20:32:45.972100    2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:32:45.972134 kubelet[2539]: W0113 20:32:45.972120    2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:32:45.972134 kubelet[2539]: E0113 20:32:45.972133    2539 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 13 20:32:46.054987 kubelet[2539]: E0113 20:32:46.054747    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:46.057023 containerd[1468]: time="2025-01-13T20:32:46.056220008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6fbccd4785-rjgpm,Uid:8fa68604-379c-44e9-8992-5a41fb82b27b,Namespace:calico-system,Attempt:0,}"
Jan 13 20:32:46.061417 kubelet[2539]: E0113 20:32:46.061323    2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:32:46.061417 kubelet[2539]: W0113 20:32:46.061347    2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:32:46.061417 kubelet[2539]: E0113 20:32:46.061367    2539 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[The driver-call.go:262 / driver-call.go:149 / plugins.go:691 FlexVolume failure triplet above repeats 24 more times between 20:32:46.061 and 20:32:46.096 as the kubelet re-probes the nodeagent~uds plugin directory; the identical lines are elided here.]
Jan 13 20:32:46.102305 kubelet[2539]: E0113 20:32:46.102267    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:46.103219 containerd[1468]: time="2025-01-13T20:32:46.103064284Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mtrcp,Uid:394353f5-94cf-4d34-9b81-4e7ae346fd64,Namespace:calico-system,Attempt:0,}"
Jan 13 20:32:46.111715 kubelet[2539]: E0113 20:32:46.111672    2539 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 13 20:32:46.111715 kubelet[2539]: W0113 20:32:46.111711    2539 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 13 20:32:46.111901 kubelet[2539]: E0113 20:32:46.111731    2539 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
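
A compact illustration of what the failing driver-call machinery above is doing: exec a FlexVolume driver binary and decode its stdout as the spec's JSON status object. On this host the nodeagent~uds driver binary is missing, so stdout is empty and decoding fails with exactly the "unexpected end of JSON input" seen in each triplet. This is a sketch under those assumptions, not the kubelet's own code; the DriverStatus shape follows the documented FlexVolume response format:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // DriverStatus mirrors the FlexVolume JSON a driver must print on stdout.
    type DriverStatus struct {
    	Status       string          `json:"status"` // "Success", "Failure", "Not supported"
    	Message      string          `json:"message,omitempty"`
    	Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func callDriver(path string, args ...string) (*DriverStatus, error) {
    	out, err := exec.Command(path, args...).Output()
    	if err != nil {
    		// Corresponds to the driver-call.go:149 warning in the log; the
    		// (empty) output is still handed to the JSON decoder below.
    		fmt.Println("driver call failed:", err)
    	}
    	var st DriverStatus
    	if err := json.Unmarshal(out, &st); err != nil {
    		return nil, err // empty output -> "unexpected end of JSON input"
    	}
    	return &st, nil
    }

    func main() {
    	_, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
    	fmt.Println(err)
    }
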
Jan 13 20:32:46.141045 containerd[1468]: time="2025-01-13T20:32:46.140830092Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:32:46.141045 containerd[1468]: time="2025-01-13T20:32:46.140904535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:32:46.141045 containerd[1468]: time="2025-01-13T20:32:46.140918255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:32:46.141323 containerd[1468]: time="2025-01-13T20:32:46.141057500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:32:46.148533 containerd[1468]: time="2025-01-13T20:32:46.148295178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:32:46.148748 containerd[1468]: time="2025-01-13T20:32:46.148515026Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:32:46.148748 containerd[1468]: time="2025-01-13T20:32:46.148564548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:32:46.150936 containerd[1468]: time="2025-01-13T20:32:46.150189410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:32:46.158089 systemd[1]: Started cri-containerd-45577689656203ff1b0ada0234a8832c68b884c24c718fc7a51919e8acd9b3a0.scope - libcontainer container 45577689656203ff1b0ada0234a8832c68b884c24c718fc7a51919e8acd9b3a0.
Jan 13 20:32:46.164176 systemd[1]: Started cri-containerd-fa1dc4c42c4624db52291b16f3f0974b1d6e18ba3cb8ac1ca8d1a91cae030201.scope - libcontainer container fa1dc4c42c4624db52291b16f3f0974b1d6e18ba3cb8ac1ca8d1a91cae030201.
Jan 13 20:32:46.203679 containerd[1468]: time="2025-01-13T20:32:46.203622059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mtrcp,Uid:394353f5-94cf-4d34-9b81-4e7ae346fd64,Namespace:calico-system,Attempt:0,} returns sandbox id \"45577689656203ff1b0ada0234a8832c68b884c24c718fc7a51919e8acd9b3a0\""
Jan 13 20:32:46.205142 kubelet[2539]: E0113 20:32:46.204821    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:46.206756 containerd[1468]: time="2025-01-13T20:32:46.206714457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 13 20:32:46.228610 containerd[1468]: time="2025-01-13T20:32:46.228570895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6fbccd4785-rjgpm,Uid:8fa68604-379c-44e9-8992-5a41fb82b27b,Namespace:calico-system,Attempt:0,} returns sandbox id \"fa1dc4c42c4624db52291b16f3f0974b1d6e18ba3cb8ac1ca8d1a91cae030201\""
Jan 13 20:32:46.229285 kubelet[2539]: E0113 20:32:46.229262    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:47.436284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount158871378.mount: Deactivated successfully.
Jan 13 20:32:47.492688 containerd[1468]: time="2025-01-13T20:32:47.492644675Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:32:47.493644 containerd[1468]: time="2025-01-13T20:32:47.493109653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6487603"
Jan 13 20:32:47.494567 containerd[1468]: time="2025-01-13T20:32:47.494540905Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:32:47.497071 containerd[1468]: time="2025-01-13T20:32:47.497039517Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:32:47.497714 containerd[1468]: time="2025-01-13T20:32:47.497574696Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.290818478s"
Jan 13 20:32:47.497714 containerd[1468]: time="2025-01-13T20:32:47.497606418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\""
Jan 13 20:32:47.499186 containerd[1468]: time="2025-01-13T20:32:47.498968828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Jan 13 20:32:47.499837 containerd[1468]: time="2025-01-13T20:32:47.499707815Z" level=info msg="CreateContainer within sandbox \"45577689656203ff1b0ada0234a8832c68b884c24c718fc7a51919e8acd9b3a0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 13 20:32:47.511426 containerd[1468]: time="2025-01-13T20:32:47.511379283Z" level=info msg="CreateContainer within sandbox \"45577689656203ff1b0ada0234a8832c68b884c24c718fc7a51919e8acd9b3a0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f045b489d3f02b050d7f4b5251fae3617b6a54570592e34acc8a2e798b311097\""
Jan 13 20:32:47.511998 containerd[1468]: time="2025-01-13T20:32:47.511813059Z" level=info msg="StartContainer for \"f045b489d3f02b050d7f4b5251fae3617b6a54570592e34acc8a2e798b311097\""
Jan 13 20:32:47.542094 systemd[1]: Started cri-containerd-f045b489d3f02b050d7f4b5251fae3617b6a54570592e34acc8a2e798b311097.scope - libcontainer container f045b489d3f02b050d7f4b5251fae3617b6a54570592e34acc8a2e798b311097.
Jan 13 20:32:47.566845 containerd[1468]: time="2025-01-13T20:32:47.564739763Z" level=info msg="StartContainer for \"f045b489d3f02b050d7f4b5251fae3617b6a54570592e34acc8a2e798b311097\" returns successfully"
Jan 13 20:32:47.591596 systemd[1]: cri-containerd-f045b489d3f02b050d7f4b5251fae3617b6a54570592e34acc8a2e798b311097.scope: Deactivated successfully.
Jan 13 20:32:47.625944 containerd[1468]: time="2025-01-13T20:32:47.619972351Z" level=info msg="shim disconnected" id=f045b489d3f02b050d7f4b5251fae3617b6a54570592e34acc8a2e798b311097 namespace=k8s.io
Jan 13 20:32:47.625944 containerd[1468]: time="2025-01-13T20:32:47.625943250Z" level=warning msg="cleaning up after shim disconnected" id=f045b489d3f02b050d7f4b5251fae3617b6a54570592e34acc8a2e798b311097 namespace=k8s.io
Jan 13 20:32:47.625944 containerd[1468]: time="2025-01-13T20:32:47.625956090Z" level=info msg="cleaning up dead shim" namespace=k8s.io
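
These three shim lines, together with the scope deactivation just above, read as the normal teardown of a short-lived container rather than a crash: flexvol-driver is an init container in Calico's node pod, so it runs once, exits, systemd deactivates its transient scope, and containerd's runc shim disconnects and is cleaned up. The same pattern repeats for install-cni at 20:32:52 below.
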
Jan 13 20:32:47.788030 kubelet[2539]: E0113 20:32:47.787951    2539 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mz55k" podUID="1ccd530c-1ce4-41fc-b0fc-1d9142439edd"
Jan 13 20:32:47.829637 kubelet[2539]: E0113 20:32:47.829462    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:47.864560 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f045b489d3f02b050d7f4b5251fae3617b6a54570592e34acc8a2e798b311097-rootfs.mount: Deactivated successfully.
Jan 13 20:32:48.644423 containerd[1468]: time="2025-01-13T20:32:48.644364991Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:32:48.645055 containerd[1468]: time="2025-01-13T20:32:48.644972973Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=27861516"
Jan 13 20:32:48.645677 containerd[1468]: time="2025-01-13T20:32:48.645626876Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:32:48.657696 containerd[1468]: time="2025-01-13T20:32:48.657649219Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:32:48.659517 containerd[1468]: time="2025-01-13T20:32:48.659434442Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 1.160433692s"
Jan 13 20:32:48.659517 containerd[1468]: time="2025-01-13T20:32:48.659471243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\""
Jan 13 20:32:48.661406 containerd[1468]: time="2025-01-13T20:32:48.661198624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Jan 13 20:32:48.679728 containerd[1468]: time="2025-01-13T20:32:48.679682354Z" level=info msg="CreateContainer within sandbox \"fa1dc4c42c4624db52291b16f3f0974b1d6e18ba3cb8ac1ca8d1a91cae030201\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 13 20:32:48.691139 containerd[1468]: time="2025-01-13T20:32:48.691085476Z" level=info msg="CreateContainer within sandbox \"fa1dc4c42c4624db52291b16f3f0974b1d6e18ba3cb8ac1ca8d1a91cae030201\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3db354693aaee8e4a22497127b54324c8c76c945f5188134256514b1e2f425ab\""
Jan 13 20:32:48.691817 containerd[1468]: time="2025-01-13T20:32:48.691598494Z" level=info msg="StartContainer for \"3db354693aaee8e4a22497127b54324c8c76c945f5188134256514b1e2f425ab\""
Jan 13 20:32:48.718158 systemd[1]: Started cri-containerd-3db354693aaee8e4a22497127b54324c8c76c945f5188134256514b1e2f425ab.scope - libcontainer container 3db354693aaee8e4a22497127b54324c8c76c945f5188134256514b1e2f425ab.
Jan 13 20:32:48.750647 containerd[1468]: time="2025-01-13T20:32:48.750599251Z" level=info msg="StartContainer for \"3db354693aaee8e4a22497127b54324c8c76c945f5188134256514b1e2f425ab\" returns successfully"
Jan 13 20:32:48.892163 kubelet[2539]: E0113 20:32:48.888469    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:48.922201 kubelet[2539]: I0113 20:32:48.922037    2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6fbccd4785-rjgpm" podStartSLOduration=1.492307026 podStartE2EDuration="3.922020166s" podCreationTimestamp="2025-01-13 20:32:45 +0000 UTC" firstStartedPulling="2025-01-13 20:32:46.230689136 +0000 UTC m=+14.525553789" lastFinishedPulling="2025-01-13 20:32:48.660402316 +0000 UTC m=+16.955266929" observedRunningTime="2025-01-13 20:32:48.921449826 +0000 UTC m=+17.216314479" watchObservedRunningTime="2025-01-13 20:32:48.922020166 +0000 UTC m=+17.216884819"
Jan 13 20:32:49.793864 kubelet[2539]: E0113 20:32:49.790589    2539 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mz55k" podUID="1ccd530c-1ce4-41fc-b0fc-1d9142439edd"
Jan 13 20:32:49.871502 kubelet[2539]: E0113 20:32:49.871466    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:50.879376 kubelet[2539]: E0113 20:32:50.879335    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:51.788361 kubelet[2539]: E0113 20:32:51.788319    2539 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mz55k" podUID="1ccd530c-1ce4-41fc-b0fc-1d9142439edd"
Jan 13 20:32:51.797917 containerd[1468]: time="2025-01-13T20:32:51.797865103Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:32:51.798603 containerd[1468]: time="2025-01-13T20:32:51.798509723Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123"
Jan 13 20:32:51.799248 containerd[1468]: time="2025-01-13T20:32:51.799212265Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:32:51.801776 containerd[1468]: time="2025-01-13T20:32:51.801745264Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:32:51.803211 containerd[1468]: time="2025-01-13T20:32:51.803173028Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 3.141941283s"
Jan 13 20:32:51.803211 containerd[1468]: time="2025-01-13T20:32:51.803208549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\""
Jan 13 20:32:51.806563 containerd[1468]: time="2025-01-13T20:32:51.806532213Z" level=info msg="CreateContainer within sandbox \"45577689656203ff1b0ada0234a8832c68b884c24c718fc7a51919e8acd9b3a0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 13 20:32:51.817380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1132995896.mount: Deactivated successfully.
Jan 13 20:32:51.819416 containerd[1468]: time="2025-01-13T20:32:51.819381494Z" level=info msg="CreateContainer within sandbox \"45577689656203ff1b0ada0234a8832c68b884c24c718fc7a51919e8acd9b3a0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"98cf483d83e8ddb0317a7d64b9fe77313d7c037333790a4dfffaf1d67979590e\""
Jan 13 20:32:51.822199 containerd[1468]: time="2025-01-13T20:32:51.822172181Z" level=info msg="StartContainer for \"98cf483d83e8ddb0317a7d64b9fe77313d7c037333790a4dfffaf1d67979590e\""
Jan 13 20:32:51.861099 systemd[1]: Started cri-containerd-98cf483d83e8ddb0317a7d64b9fe77313d7c037333790a4dfffaf1d67979590e.scope - libcontainer container 98cf483d83e8ddb0317a7d64b9fe77313d7c037333790a4dfffaf1d67979590e.
Jan 13 20:32:51.888401 containerd[1468]: time="2025-01-13T20:32:51.888281165Z" level=info msg="StartContainer for \"98cf483d83e8ddb0317a7d64b9fe77313d7c037333790a4dfffaf1d67979590e\" returns successfully"
Jan 13 20:32:52.388864 systemd[1]: cri-containerd-98cf483d83e8ddb0317a7d64b9fe77313d7c037333790a4dfffaf1d67979590e.scope: Deactivated successfully.
Jan 13 20:32:52.405388 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98cf483d83e8ddb0317a7d64b9fe77313d7c037333790a4dfffaf1d67979590e-rootfs.mount: Deactivated successfully.
Jan 13 20:32:52.459435 kubelet[2539]: I0113 20:32:52.459404    2539 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jan 13 20:32:52.498962 containerd[1468]: time="2025-01-13T20:32:52.498779717Z" level=info msg="shim disconnected" id=98cf483d83e8ddb0317a7d64b9fe77313d7c037333790a4dfffaf1d67979590e namespace=k8s.io
Jan 13 20:32:52.498962 containerd[1468]: time="2025-01-13T20:32:52.498834159Z" level=warning msg="cleaning up after shim disconnected" id=98cf483d83e8ddb0317a7d64b9fe77313d7c037333790a4dfffaf1d67979590e namespace=k8s.io
Jan 13 20:32:52.498962 containerd[1468]: time="2025-01-13T20:32:52.498850640Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 13 20:32:52.523734 systemd[1]: Created slice kubepods-besteffort-pod45eae7c1_df7d_4875_9c9e_bd5fcfaa32a1.slice - libcontainer container kubepods-besteffort-pod45eae7c1_df7d_4875_9c9e_bd5fcfaa32a1.slice.
Jan 13 20:32:52.533121 systemd[1]: Created slice kubepods-besteffort-pod5c636eda_04d8_4d14_8cff_128a25fb05c4.slice - libcontainer container kubepods-besteffort-pod5c636eda_04d8_4d14_8cff_128a25fb05c4.slice.
Jan 13 20:32:52.538533 systemd[1]: Created slice kubepods-burstable-pod94b6ed52_6d9b_4b44_8661_86ed9c610caa.slice - libcontainer container kubepods-burstable-pod94b6ed52_6d9b_4b44_8661_86ed9c610caa.slice.
Jan 13 20:32:52.543713 kubelet[2539]: I0113 20:32:52.542941    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjrcw\" (UniqueName: \"kubernetes.io/projected/5c636eda-04d8-4d14-8cff-128a25fb05c4-kube-api-access-fjrcw\") pod \"calico-apiserver-69b5874dc7-m22rh\" (UID: \"5c636eda-04d8-4d14-8cff-128a25fb05c4\") " pod="calico-apiserver/calico-apiserver-69b5874dc7-m22rh"
Jan 13 20:32:52.543713 kubelet[2539]: I0113 20:32:52.542986    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45eae7c1-df7d-4875-9c9e-bd5fcfaa32a1-tigera-ca-bundle\") pod \"calico-kube-controllers-8574fbbb74-f4267\" (UID: \"45eae7c1-df7d-4875-9c9e-bd5fcfaa32a1\") " pod="calico-system/calico-kube-controllers-8574fbbb74-f4267"
Jan 13 20:32:52.543713 kubelet[2539]: I0113 20:32:52.543004    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdrjp\" (UniqueName: \"kubernetes.io/projected/020d487a-592d-466b-b805-5233e1a92845-kube-api-access-fdrjp\") pod \"calico-apiserver-69b5874dc7-pn7tk\" (UID: \"020d487a-592d-466b-b805-5233e1a92845\") " pod="calico-apiserver/calico-apiserver-69b5874dc7-pn7tk"
Jan 13 20:32:52.543713 kubelet[2539]: I0113 20:32:52.543023    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fv6d\" (UniqueName: \"kubernetes.io/projected/956a79b0-4c28-4303-8930-015d57ee6a8d-kube-api-access-5fv6d\") pod \"coredns-6f6b679f8f-gfjnm\" (UID: \"956a79b0-4c28-4303-8930-015d57ee6a8d\") " pod="kube-system/coredns-6f6b679f8f-gfjnm"
Jan 13 20:32:52.543713 kubelet[2539]: I0113 20:32:52.543038    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/94b6ed52-6d9b-4b44-8661-86ed9c610caa-config-volume\") pod \"coredns-6f6b679f8f-2rhs8\" (UID: \"94b6ed52-6d9b-4b44-8661-86ed9c610caa\") " pod="kube-system/coredns-6f6b679f8f-2rhs8"
Jan 13 20:32:52.543570 systemd[1]: Created slice kubepods-besteffort-pod020d487a_592d_466b_b805_5233e1a92845.slice - libcontainer container kubepods-besteffort-pod020d487a_592d_466b_b805_5233e1a92845.slice.
Jan 13 20:32:52.543980 kubelet[2539]: I0113 20:32:52.543060    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndgmm\" (UniqueName: \"kubernetes.io/projected/45eae7c1-df7d-4875-9c9e-bd5fcfaa32a1-kube-api-access-ndgmm\") pod \"calico-kube-controllers-8574fbbb74-f4267\" (UID: \"45eae7c1-df7d-4875-9c9e-bd5fcfaa32a1\") " pod="calico-system/calico-kube-controllers-8574fbbb74-f4267"
Jan 13 20:32:52.543980 kubelet[2539]: I0113 20:32:52.543078    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5c636eda-04d8-4d14-8cff-128a25fb05c4-calico-apiserver-certs\") pod \"calico-apiserver-69b5874dc7-m22rh\" (UID: \"5c636eda-04d8-4d14-8cff-128a25fb05c4\") " pod="calico-apiserver/calico-apiserver-69b5874dc7-m22rh"
Jan 13 20:32:52.543980 kubelet[2539]: I0113 20:32:52.543099    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/020d487a-592d-466b-b805-5233e1a92845-calico-apiserver-certs\") pod \"calico-apiserver-69b5874dc7-pn7tk\" (UID: \"020d487a-592d-466b-b805-5233e1a92845\") " pod="calico-apiserver/calico-apiserver-69b5874dc7-pn7tk"
Jan 13 20:32:52.543980 kubelet[2539]: I0113 20:32:52.543114    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/956a79b0-4c28-4303-8930-015d57ee6a8d-config-volume\") pod \"coredns-6f6b679f8f-gfjnm\" (UID: \"956a79b0-4c28-4303-8930-015d57ee6a8d\") " pod="kube-system/coredns-6f6b679f8f-gfjnm"
Jan 13 20:32:52.543980 kubelet[2539]: I0113 20:32:52.543130    2539 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmjnp\" (UniqueName: \"kubernetes.io/projected/94b6ed52-6d9b-4b44-8661-86ed9c610caa-kube-api-access-qmjnp\") pod \"coredns-6f6b679f8f-2rhs8\" (UID: \"94b6ed52-6d9b-4b44-8661-86ed9c610caa\") " pod="kube-system/coredns-6f6b679f8f-2rhs8"
Jan 13 20:32:52.550416 systemd[1]: Created slice kubepods-burstable-pod956a79b0_4c28_4303_8930_015d57ee6a8d.slice - libcontainer container kubepods-burstable-pod956a79b0_4c28_4303_8930_015d57ee6a8d.slice.
Jan 13 20:32:52.829983 containerd[1468]: time="2025-01-13T20:32:52.829912706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8574fbbb74-f4267,Uid:45eae7c1-df7d-4875-9c9e-bd5fcfaa32a1,Namespace:calico-system,Attempt:0,}"
Jan 13 20:32:52.835718 containerd[1468]: time="2025-01-13T20:32:52.835675479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b5874dc7-m22rh,Uid:5c636eda-04d8-4d14-8cff-128a25fb05c4,Namespace:calico-apiserver,Attempt:0,}"
Jan 13 20:32:52.842627 kubelet[2539]: E0113 20:32:52.842583    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:52.843106 containerd[1468]: time="2025-01-13T20:32:52.843064541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2rhs8,Uid:94b6ed52-6d9b-4b44-8661-86ed9c610caa,Namespace:kube-system,Attempt:0,}"
Jan 13 20:32:52.846599 containerd[1468]: time="2025-01-13T20:32:52.846567486Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b5874dc7-pn7tk,Uid:020d487a-592d-466b-b805-5233e1a92845,Namespace:calico-apiserver,Attempt:0,}"
Jan 13 20:32:52.852389 kubelet[2539]: E0113 20:32:52.852356    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:52.853049 containerd[1468]: time="2025-01-13T20:32:52.852770192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gfjnm,Uid:956a79b0-4c28-4303-8930-015d57ee6a8d,Namespace:kube-system,Attempt:0,}"
Jan 13 20:32:52.889873 kubelet[2539]: E0113 20:32:52.889813    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
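[editor's note] The repeated "Nameserver limits exceeded" events are the kubelet capping each pod's resolv.conf at three nameservers (the classic glibc limit); extra entries from the host's resolver configuration are dropped, leaving the applied line shown. A toy illustration of that truncation, assuming the documented cap of 3; this is not the kubelet's actual dns.go code:

    package main

    import "fmt"

    const maxNameservers = 3 // resolv.conf limit the kubelet enforces

    func capNameservers(ns []string) []string {
        if len(ns) > maxNameservers {
            return ns[:maxNameservers]
        }
        return ns
    }

    func main() {
        // The applied line in the log keeps exactly the first three entries;
        // "9.9.9.9" here is a made-up fourth entry for illustration.
        hosts := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
        fmt.Println(capNameservers(hosts))
    }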
Jan 13 20:32:52.914140 containerd[1468]: time="2025-01-13T20:32:52.913243649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Jan 13 20:32:53.183709 containerd[1468]: time="2025-01-13T20:32:53.183579771Z" level=error msg="Failed to destroy network for sandbox \"227014b4c404495432de99cad5b4d7041c29a5b1fa0a53c7fdad08f5eb812dc2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:53.184087 containerd[1468]: time="2025-01-13T20:32:53.184053584Z" level=error msg="encountered an error cleaning up failed sandbox \"227014b4c404495432de99cad5b4d7041c29a5b1fa0a53c7fdad08f5eb812dc2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:53.184175 containerd[1468]: time="2025-01-13T20:32:53.184118146Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2rhs8,Uid:94b6ed52-6d9b-4b44-8661-86ed9c610caa,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"227014b4c404495432de99cad5b4d7041c29a5b1fa0a53c7fdad08f5eb812dc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:53.184688 containerd[1468]: time="2025-01-13T20:32:53.184588120Z" level=error msg="Failed to destroy network for sandbox \"af8f3d97f148ecdc6de1f17773435b7a3768a0af4d1c424f916403d78a577bb9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:53.184939 containerd[1468]: time="2025-01-13T20:32:53.184903889Z" level=error msg="encountered an error cleaning up failed sandbox \"af8f3d97f148ecdc6de1f17773435b7a3768a0af4d1c424f916403d78a577bb9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:53.184993 containerd[1468]: time="2025-01-13T20:32:53.184970771Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b5874dc7-pn7tk,Uid:020d487a-592d-466b-b805-5233e1a92845,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"af8f3d97f148ecdc6de1f17773435b7a3768a0af4d1c424f916403d78a577bb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:53.186164 kubelet[2539]: E0113 20:32:53.186105    2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"227014b4c404495432de99cad5b4d7041c29a5b1fa0a53c7fdad08f5eb812dc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:53.186513 kubelet[2539]: E0113 20:32:53.186189    2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"227014b4c404495432de99cad5b4d7041c29a5b1fa0a53c7fdad08f5eb812dc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-2rhs8"
Jan 13 20:32:53.186513 kubelet[2539]: E0113 20:32:53.186504    2539 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"227014b4c404495432de99cad5b4d7041c29a5b1fa0a53c7fdad08f5eb812dc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-2rhs8"
Jan 13 20:32:53.186597 kubelet[2539]: E0113 20:32:53.186559    2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-2rhs8_kube-system(94b6ed52-6d9b-4b44-8661-86ed9c610caa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-2rhs8_kube-system(94b6ed52-6d9b-4b44-8661-86ed9c610caa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"227014b4c404495432de99cad5b4d7041c29a5b1fa0a53c7fdad08f5eb812dc2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-2rhs8" podUID="94b6ed52-6d9b-4b44-8661-86ed9c610caa"
Jan 13 20:32:53.188648 kubelet[2539]: E0113 20:32:53.188559    2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af8f3d97f148ecdc6de1f17773435b7a3768a0af4d1c424f916403d78a577bb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:53.188648 kubelet[2539]: E0113 20:32:53.188615    2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af8f3d97f148ecdc6de1f17773435b7a3768a0af4d1c424f916403d78a577bb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69b5874dc7-pn7tk"
Jan 13 20:32:53.188648 kubelet[2539]: E0113 20:32:53.188633    2539 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af8f3d97f148ecdc6de1f17773435b7a3768a0af4d1c424f916403d78a577bb9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69b5874dc7-pn7tk"
Jan 13 20:32:53.188886 kubelet[2539]: E0113 20:32:53.188670    2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69b5874dc7-pn7tk_calico-apiserver(020d487a-592d-466b-b805-5233e1a92845)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69b5874dc7-pn7tk_calico-apiserver(020d487a-592d-466b-b805-5233e1a92845)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"af8f3d97f148ecdc6de1f17773435b7a3768a0af4d1c424f916403d78a577bb9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69b5874dc7-pn7tk" podUID="020d487a-592d-466b-b805-5233e1a92845"
Jan 13 20:32:53.191757 containerd[1468]: time="2025-01-13T20:32:53.191715806Z" level=error msg="Failed to destroy network for sandbox \"fb1a52e546c74327b879443ffdc1b6e9a34008fea5cd189b80d312803f2bce7e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:53.192243 containerd[1468]: time="2025-01-13T20:32:53.192200540Z" level=error msg="encountered an error cleaning up failed sandbox \"fb1a52e546c74327b879443ffdc1b6e9a34008fea5cd189b80d312803f2bce7e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:53.192301 containerd[1468]: time="2025-01-13T20:32:53.192262382Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8574fbbb74-f4267,Uid:45eae7c1-df7d-4875-9c9e-bd5fcfaa32a1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fb1a52e546c74327b879443ffdc1b6e9a34008fea5cd189b80d312803f2bce7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:53.193204 kubelet[2539]: E0113 20:32:53.192448    2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb1a52e546c74327b879443ffdc1b6e9a34008fea5cd189b80d312803f2bce7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:53.193204 kubelet[2539]: E0113 20:32:53.192492    2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb1a52e546c74327b879443ffdc1b6e9a34008fea5cd189b80d312803f2bce7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8574fbbb74-f4267"
Jan 13 20:32:53.193204 kubelet[2539]: E0113 20:32:53.192510    2539 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb1a52e546c74327b879443ffdc1b6e9a34008fea5cd189b80d312803f2bce7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8574fbbb74-f4267"
Jan 13 20:32:53.193408 kubelet[2539]: E0113 20:32:53.192546    2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8574fbbb74-f4267_calico-system(45eae7c1-df7d-4875-9c9e-bd5fcfaa32a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8574fbbb74-f4267_calico-system(45eae7c1-df7d-4875-9c9e-bd5fcfaa32a1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb1a52e546c74327b879443ffdc1b6e9a34008fea5cd189b80d312803f2bce7e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8574fbbb74-f4267" podUID="45eae7c1-df7d-4875-9c9e-bd5fcfaa32a1"
Jan 13 20:32:53.195251 containerd[1468]: time="2025-01-13T20:32:53.195210587Z" level=error msg="Failed to destroy network for sandbox \"2a8d29fb1359b46e9799e64a9f306066c6114265817d537da33b51b5aca49153\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:53.195564 containerd[1468]: time="2025-01-13T20:32:53.195526917Z" level=error msg="encountered an error cleaning up failed sandbox \"2a8d29fb1359b46e9799e64a9f306066c6114265817d537da33b51b5aca49153\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:53.195620 containerd[1468]: time="2025-01-13T20:32:53.195576318Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gfjnm,Uid:956a79b0-4c28-4303-8930-015d57ee6a8d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2a8d29fb1359b46e9799e64a9f306066c6114265817d537da33b51b5aca49153\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:53.195893 kubelet[2539]: E0113 20:32:53.195842    2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a8d29fb1359b46e9799e64a9f306066c6114265817d537da33b51b5aca49153\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:53.195981 kubelet[2539]: E0113 20:32:53.195905    2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a8d29fb1359b46e9799e64a9f306066c6114265817d537da33b51b5aca49153\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-gfjnm"
Jan 13 20:32:53.195981 kubelet[2539]: E0113 20:32:53.195954    2539 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a8d29fb1359b46e9799e64a9f306066c6114265817d537da33b51b5aca49153\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-gfjnm"
Jan 13 20:32:53.196047 kubelet[2539]: E0113 20:32:53.195994    2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-gfjnm_kube-system(956a79b0-4c28-4303-8930-015d57ee6a8d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-gfjnm_kube-system(956a79b0-4c28-4303-8930-015d57ee6a8d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a8d29fb1359b46e9799e64a9f306066c6114265817d537da33b51b5aca49153\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-gfjnm" podUID="956a79b0-4c28-4303-8930-015d57ee6a8d"
Jan 13 20:32:53.203498 containerd[1468]: time="2025-01-13T20:32:53.203443866Z" level=error msg="Failed to destroy network for sandbox \"2171bb1e1dd6639cc619c888993f6c37ba501d56796a45fb4a35b31eef705b9c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:53.203741 containerd[1468]: time="2025-01-13T20:32:53.203716034Z" level=error msg="encountered an error cleaning up failed sandbox \"2171bb1e1dd6639cc619c888993f6c37ba501d56796a45fb4a35b31eef705b9c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:53.203779 containerd[1468]: time="2025-01-13T20:32:53.203765235Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b5874dc7-m22rh,Uid:5c636eda-04d8-4d14-8cff-128a25fb05c4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2171bb1e1dd6639cc619c888993f6c37ba501d56796a45fb4a35b31eef705b9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:53.203984 kubelet[2539]: E0113 20:32:53.203941    2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2171bb1e1dd6639cc619c888993f6c37ba501d56796a45fb4a35b31eef705b9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:53.204021 kubelet[2539]: E0113 20:32:53.204004    2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2171bb1e1dd6639cc619c888993f6c37ba501d56796a45fb4a35b31eef705b9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69b5874dc7-m22rh"
Jan 13 20:32:53.204043 kubelet[2539]: E0113 20:32:53.204024    2539 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2171bb1e1dd6639cc619c888993f6c37ba501d56796a45fb4a35b31eef705b9c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69b5874dc7-m22rh"
Jan 13 20:32:53.204084 kubelet[2539]: E0113 20:32:53.204061    2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69b5874dc7-m22rh_calico-apiserver(5c636eda-04d8-4d14-8cff-128a25fb05c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69b5874dc7-m22rh_calico-apiserver(5c636eda-04d8-4d14-8cff-128a25fb05c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2171bb1e1dd6639cc619c888993f6c37ba501d56796a45fb4a35b31eef705b9c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69b5874dc7-m22rh" podUID="5c636eda-04d8-4d14-8cff-128a25fb05c4"
Jan 13 20:32:53.793799 systemd[1]: Created slice kubepods-besteffort-pod1ccd530c_1ce4_41fc_b0fc_1d9142439edd.slice - libcontainer container kubepods-besteffort-pod1ccd530c_1ce4_41fc_b0fc_1d9142439edd.slice.
Jan 13 20:32:53.795899 containerd[1468]: time="2025-01-13T20:32:53.795860095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mz55k,Uid:1ccd530c-1ce4-41fc-b0fc-1d9142439edd,Namespace:calico-system,Attempt:0,}"
Jan 13 20:32:53.817232 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-af8f3d97f148ecdc6de1f17773435b7a3768a0af4d1c424f916403d78a577bb9-shm.mount: Deactivated successfully.
Jan 13 20:32:53.817318 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fb1a52e546c74327b879443ffdc1b6e9a34008fea5cd189b80d312803f2bce7e-shm.mount: Deactivated successfully.
Jan 13 20:32:53.817371 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2171bb1e1dd6639cc619c888993f6c37ba501d56796a45fb4a35b31eef705b9c-shm.mount: Deactivated successfully.
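[editor's note] Each failed sandbox leaves behind a per-sandbox /dev/shm tmpfs that containerd mounts under /run/containerd/io.containerd.grpc.v1.cri/sandboxes/<id>/shm; the "Deactivated successfully" lines are systemd confirming those mounts were released during cleanup. In systemd mount-unit names, "/" is encoded as "-" and a literal "-" as "\x2d", so the unit names decode back to paths. A small sketch, equivalent in spirit to systemd-escape --unescape --path but handling only the \x2d escapes seen here:

    package main

    import (
        "fmt"
        "strings"
    )

    // unitToPath decodes a systemd .mount unit name back into a filesystem
    // path: escaped hyphens are protected first, then separators become "/".
    func unitToPath(unit string) string {
        name := strings.TrimSuffix(unit, ".mount")
        name = strings.ReplaceAll(name, `\x2d`, "\x00")
        path := "/" + strings.ReplaceAll(name, "-", "/")
        return strings.ReplaceAll(path, "\x00", "-")
    }

    func main() {
        // Prints /run/containerd/io.containerd.grpc.v1.cri/sandboxes/<id>/shm
        fmt.Println(unitToPath(`run-containerd-io.containerd.grpc.v1.cri-sandboxes-af8f3d97f148ecdc6de1f17773435b7a3768a0af4d1c424f916403d78a577bb9-shm.mount`))
    }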
Jan 13 20:32:53.844755 containerd[1468]: time="2025-01-13T20:32:53.844703469Z" level=error msg="Failed to destroy network for sandbox \"d8d428451f0a685837713c470e2664b25b6bc2aa86d8ee5c05620efa28ca3f3c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:53.845900 containerd[1468]: time="2025-01-13T20:32:53.845804460Z" level=error msg="encountered an error cleaning up failed sandbox \"d8d428451f0a685837713c470e2664b25b6bc2aa86d8ee5c05620efa28ca3f3c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:53.845900 containerd[1468]: time="2025-01-13T20:32:53.845861582Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mz55k,Uid:1ccd530c-1ce4-41fc-b0fc-1d9142439edd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d8d428451f0a685837713c470e2664b25b6bc2aa86d8ee5c05620efa28ca3f3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:53.847168 kubelet[2539]: E0113 20:32:53.846186    2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8d428451f0a685837713c470e2664b25b6bc2aa86d8ee5c05620efa28ca3f3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:53.847168 kubelet[2539]: E0113 20:32:53.846255    2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8d428451f0a685837713c470e2664b25b6bc2aa86d8ee5c05620efa28ca3f3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mz55k"
Jan 13 20:32:53.847168 kubelet[2539]: E0113 20:32:53.846273    2539 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8d428451f0a685837713c470e2664b25b6bc2aa86d8ee5c05620efa28ca3f3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mz55k"
Jan 13 20:32:53.846518 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d8d428451f0a685837713c470e2664b25b6bc2aa86d8ee5c05620efa28ca3f3c-shm.mount: Deactivated successfully.
Jan 13 20:32:53.847519 kubelet[2539]: E0113 20:32:53.846310    2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mz55k_calico-system(1ccd530c-1ce4-41fc-b0fc-1d9142439edd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mz55k_calico-system(1ccd530c-1ce4-41fc-b0fc-1d9142439edd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8d428451f0a685837713c470e2664b25b6bc2aa86d8ee5c05620efa28ca3f3c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mz55k" podUID="1ccd530c-1ce4-41fc-b0fc-1d9142439edd"
Jan 13 20:32:53.896430 kubelet[2539]: I0113 20:32:53.895496    2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af8f3d97f148ecdc6de1f17773435b7a3768a0af4d1c424f916403d78a577bb9"
Jan 13 20:32:53.896691 containerd[1468]: time="2025-01-13T20:32:53.896536649Z" level=info msg="StopPodSandbox for \"af8f3d97f148ecdc6de1f17773435b7a3768a0af4d1c424f916403d78a577bb9\""
Jan 13 20:32:53.901291 kubelet[2539]: I0113 20:32:53.897795    2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2171bb1e1dd6639cc619c888993f6c37ba501d56796a45fb4a35b31eef705b9c"
Jan 13 20:32:53.901375 containerd[1468]: time="2025-01-13T20:32:53.898346621Z" level=info msg="StopPodSandbox for \"2171bb1e1dd6639cc619c888993f6c37ba501d56796a45fb4a35b31eef705b9c\""
Jan 13 20:32:53.901375 containerd[1468]: time="2025-01-13T20:32:53.899001160Z" level=info msg="Ensure that sandbox 2171bb1e1dd6639cc619c888993f6c37ba501d56796a45fb4a35b31eef705b9c in task-service has been cleanup successfully"
Jan 13 20:32:53.901375 containerd[1468]: time="2025-01-13T20:32:53.901235345Z" level=info msg="TearDown network for sandbox \"2171bb1e1dd6639cc619c888993f6c37ba501d56796a45fb4a35b31eef705b9c\" successfully"
Jan 13 20:32:53.901375 containerd[1468]: time="2025-01-13T20:32:53.901258146Z" level=info msg="StopPodSandbox for \"2171bb1e1dd6639cc619c888993f6c37ba501d56796a45fb4a35b31eef705b9c\" returns successfully"
Jan 13 20:32:53.903566 systemd[1]: run-netns-cni\x2d8a445c0a\x2d807a\x2d4a0d\x2db77c\x2d4e35dd77ca01.mount: Deactivated successfully.
Jan 13 20:32:53.906650 kubelet[2539]: I0113 20:32:53.905623    2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8d428451f0a685837713c470e2664b25b6bc2aa86d8ee5c05620efa28ca3f3c"
Jan 13 20:32:53.906696 containerd[1468]: time="2025-01-13T20:32:53.903606694Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b5874dc7-m22rh,Uid:5c636eda-04d8-4d14-8cff-128a25fb05c4,Namespace:calico-apiserver,Attempt:1,}"
Jan 13 20:32:53.906696 containerd[1468]: time="2025-01-13T20:32:53.905003214Z" level=info msg="Ensure that sandbox af8f3d97f148ecdc6de1f17773435b7a3768a0af4d1c424f916403d78a577bb9 in task-service has been cleanup successfully"
Jan 13 20:32:53.906696 containerd[1468]: time="2025-01-13T20:32:53.905229141Z" level=info msg="TearDown network for sandbox \"af8f3d97f148ecdc6de1f17773435b7a3768a0af4d1c424f916403d78a577bb9\" successfully"
Jan 13 20:32:53.906696 containerd[1468]: time="2025-01-13T20:32:53.905244101Z" level=info msg="StopPodSandbox for \"af8f3d97f148ecdc6de1f17773435b7a3768a0af4d1c424f916403d78a577bb9\" returns successfully"
Jan 13 20:32:53.906696 containerd[1468]: time="2025-01-13T20:32:53.905679794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b5874dc7-pn7tk,Uid:020d487a-592d-466b-b805-5233e1a92845,Namespace:calico-apiserver,Attempt:1,}"
Jan 13 20:32:53.906696 containerd[1468]: time="2025-01-13T20:32:53.906110446Z" level=info msg="StopPodSandbox for \"d8d428451f0a685837713c470e2664b25b6bc2aa86d8ee5c05620efa28ca3f3c\""
Jan 13 20:32:53.906696 containerd[1468]: time="2025-01-13T20:32:53.906239090Z" level=info msg="Ensure that sandbox d8d428451f0a685837713c470e2664b25b6bc2aa86d8ee5c05620efa28ca3f3c in task-service has been cleanup successfully"
Jan 13 20:32:53.906696 containerd[1468]: time="2025-01-13T20:32:53.906454856Z" level=info msg="TearDown network for sandbox \"d8d428451f0a685837713c470e2664b25b6bc2aa86d8ee5c05620efa28ca3f3c\" successfully"
Jan 13 20:32:53.906696 containerd[1468]: time="2025-01-13T20:32:53.906469976Z" level=info msg="StopPodSandbox for \"d8d428451f0a685837713c470e2664b25b6bc2aa86d8ee5c05620efa28ca3f3c\" returns successfully"
Jan 13 20:32:53.907809 containerd[1468]: time="2025-01-13T20:32:53.907626290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mz55k,Uid:1ccd530c-1ce4-41fc-b0fc-1d9142439edd,Namespace:calico-system,Attempt:1,}"
Jan 13 20:32:53.909695 kubelet[2539]: I0113 20:32:53.908545    2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a8d29fb1359b46e9799e64a9f306066c6114265817d537da33b51b5aca49153"
Jan 13 20:32:53.909789 containerd[1468]: time="2025-01-13T20:32:53.909053691Z" level=info msg="StopPodSandbox for \"2a8d29fb1359b46e9799e64a9f306066c6114265817d537da33b51b5aca49153\""
Jan 13 20:32:53.909789 containerd[1468]: time="2025-01-13T20:32:53.909191975Z" level=info msg="Ensure that sandbox 2a8d29fb1359b46e9799e64a9f306066c6114265817d537da33b51b5aca49153 in task-service has been cleanup successfully"
Jan 13 20:32:53.909789 containerd[1468]: time="2025-01-13T20:32:53.909697190Z" level=info msg="TearDown network for sandbox \"2a8d29fb1359b46e9799e64a9f306066c6114265817d537da33b51b5aca49153\" successfully"
Jan 13 20:32:53.909789 containerd[1468]: time="2025-01-13T20:32:53.909717911Z" level=info msg="StopPodSandbox for \"2a8d29fb1359b46e9799e64a9f306066c6114265817d537da33b51b5aca49153\" returns successfully"
Jan 13 20:32:53.910116 kubelet[2539]: E0113 20:32:53.910095    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:53.910746 systemd[1]: run-netns-cni\x2d3a9a3fdb\x2d38a4\x2d6970\x2de40f\x2d5a8c9ef01eb8.mount: Deactivated successfully.
Jan 13 20:32:53.910855 systemd[1]: run-netns-cni\x2de274cc99\x2d039f\x2db947\x2d437a\x2df923d1083cfb.mount: Deactivated successfully.
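[editor's note] The run-netns-cni\x2d... mounts follow the same unit-name encoding: they are the named network namespaces (bind mounts under /run/netns, created by containerd's CRI plugin as cni-<uuid>) that are released when a failed sandbox is torn down. A quick way to inspect them on a Linux host, offered as an assumption-laden sketch rather than a diagnostic procedure from this log:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // Lists named network namespaces; containerd's CRI plugin creates
        // them as "cni-<uuid>", matching the mount units in the log.
        entries, err := os.ReadDir("/run/netns")
        if err != nil {
            fmt.Println("no named netns present:", err)
            return
        }
        for _, e := range entries {
            fmt.Println("netns:", e.Name())
        }
    }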
Jan 13 20:32:53.913485 kubelet[2539]: I0113 20:32:53.912150    2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="227014b4c404495432de99cad5b4d7041c29a5b1fa0a53c7fdad08f5eb812dc2"
Jan 13 20:32:53.913573 containerd[1468]: time="2025-01-13T20:32:53.912534032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gfjnm,Uid:956a79b0-4c28-4303-8930-015d57ee6a8d,Namespace:kube-system,Attempt:1,}"
Jan 13 20:32:53.913573 containerd[1468]: time="2025-01-13T20:32:53.912820720Z" level=info msg="StopPodSandbox for \"227014b4c404495432de99cad5b4d7041c29a5b1fa0a53c7fdad08f5eb812dc2\""
Jan 13 20:32:53.913573 containerd[1468]: time="2025-01-13T20:32:53.912974045Z" level=info msg="Ensure that sandbox 227014b4c404495432de99cad5b4d7041c29a5b1fa0a53c7fdad08f5eb812dc2 in task-service has been cleanup successfully"
Jan 13 20:32:53.913858 containerd[1468]: time="2025-01-13T20:32:53.913664745Z" level=info msg="TearDown network for sandbox \"227014b4c404495432de99cad5b4d7041c29a5b1fa0a53c7fdad08f5eb812dc2\" successfully"
Jan 13 20:32:53.913858 containerd[1468]: time="2025-01-13T20:32:53.913687905Z" level=info msg="StopPodSandbox for \"227014b4c404495432de99cad5b4d7041c29a5b1fa0a53c7fdad08f5eb812dc2\" returns successfully"
Jan 13 20:32:53.914290 kubelet[2539]: E0113 20:32:53.914139    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:53.914268 systemd[1]: run-netns-cni\x2db30221ac\x2d7c30\x2d6a45\x2dab30\x2d59500e113bb4.mount: Deactivated successfully.
Jan 13 20:32:53.914877 containerd[1468]: time="2025-01-13T20:32:53.914577291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2rhs8,Uid:94b6ed52-6d9b-4b44-8661-86ed9c610caa,Namespace:kube-system,Attempt:1,}"
Jan 13 20:32:53.915692 kubelet[2539]: I0113 20:32:53.915671    2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb1a52e546c74327b879443ffdc1b6e9a34008fea5cd189b80d312803f2bce7e"
Jan 13 20:32:53.917884 containerd[1468]: time="2025-01-13T20:32:53.916780435Z" level=info msg="StopPodSandbox for \"fb1a52e546c74327b879443ffdc1b6e9a34008fea5cd189b80d312803f2bce7e\""
Jan 13 20:32:53.917884 containerd[1468]: time="2025-01-13T20:32:53.916948200Z" level=info msg="Ensure that sandbox fb1a52e546c74327b879443ffdc1b6e9a34008fea5cd189b80d312803f2bce7e in task-service has been cleanup successfully"
Jan 13 20:32:53.918088 containerd[1468]: time="2025-01-13T20:32:53.918068512Z" level=info msg="TearDown network for sandbox \"fb1a52e546c74327b879443ffdc1b6e9a34008fea5cd189b80d312803f2bce7e\" successfully"
Jan 13 20:32:53.918192 containerd[1468]: time="2025-01-13T20:32:53.918126994Z" level=info msg="StopPodSandbox for \"fb1a52e546c74327b879443ffdc1b6e9a34008fea5cd189b80d312803f2bce7e\" returns successfully"
Jan 13 20:32:53.918834 containerd[1468]: time="2025-01-13T20:32:53.918603528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8574fbbb74-f4267,Uid:45eae7c1-df7d-4875-9c9e-bd5fcfaa32a1,Namespace:calico-system,Attempt:1,}"
Jan 13 20:32:54.143418 containerd[1468]: time="2025-01-13T20:32:54.143304966Z" level=error msg="Failed to destroy network for sandbox \"c32a74c6158719d3d3dd47a43aa691fc67ffd8fadea51c76e1cabe46239ee209\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:54.144227 containerd[1468]: time="2025-01-13T20:32:54.144126669Z" level=error msg="encountered an error cleaning up failed sandbox \"c32a74c6158719d3d3dd47a43aa691fc67ffd8fadea51c76e1cabe46239ee209\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:54.144651 containerd[1468]: time="2025-01-13T20:32:54.144514320Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b5874dc7-pn7tk,Uid:020d487a-592d-466b-b805-5233e1a92845,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"c32a74c6158719d3d3dd47a43aa691fc67ffd8fadea51c76e1cabe46239ee209\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:54.145113 kubelet[2539]: E0113 20:32:54.144986    2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c32a74c6158719d3d3dd47a43aa691fc67ffd8fadea51c76e1cabe46239ee209\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:54.145113 kubelet[2539]: E0113 20:32:54.145055    2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c32a74c6158719d3d3dd47a43aa691fc67ffd8fadea51c76e1cabe46239ee209\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69b5874dc7-pn7tk"
Jan 13 20:32:54.145113 kubelet[2539]: E0113 20:32:54.145078    2539 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c32a74c6158719d3d3dd47a43aa691fc67ffd8fadea51c76e1cabe46239ee209\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69b5874dc7-pn7tk"
Jan 13 20:32:54.145277 kubelet[2539]: E0113 20:32:54.145116    2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69b5874dc7-pn7tk_calico-apiserver(020d487a-592d-466b-b805-5233e1a92845)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69b5874dc7-pn7tk_calico-apiserver(020d487a-592d-466b-b805-5233e1a92845)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c32a74c6158719d3d3dd47a43aa691fc67ffd8fadea51c76e1cabe46239ee209\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69b5874dc7-pn7tk" podUID="020d487a-592d-466b-b805-5233e1a92845"
Jan 13 20:32:54.211021 containerd[1468]: time="2025-01-13T20:32:54.210870572Z" level=error msg="Failed to destroy network for sandbox \"690bc1b2626d272c4e59557880da2cfe9cd245155628eec4d3173c850c242820\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:54.212046 containerd[1468]: time="2025-01-13T20:32:54.211835879Z" level=error msg="encountered an error cleaning up failed sandbox \"690bc1b2626d272c4e59557880da2cfe9cd245155628eec4d3173c850c242820\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:54.212046 containerd[1468]: time="2025-01-13T20:32:54.211938922Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b5874dc7-m22rh,Uid:5c636eda-04d8-4d14-8cff-128a25fb05c4,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"690bc1b2626d272c4e59557880da2cfe9cd245155628eec4d3173c850c242820\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:54.212641 kubelet[2539]: E0113 20:32:54.212306    2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"690bc1b2626d272c4e59557880da2cfe9cd245155628eec4d3173c850c242820\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:54.212726 kubelet[2539]: E0113 20:32:54.212682    2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"690bc1b2626d272c4e59557880da2cfe9cd245155628eec4d3173c850c242820\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69b5874dc7-m22rh"
Jan 13 20:32:54.212726 kubelet[2539]: E0113 20:32:54.212706    2539 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"690bc1b2626d272c4e59557880da2cfe9cd245155628eec4d3173c850c242820\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69b5874dc7-m22rh"
Jan 13 20:32:54.212995 kubelet[2539]: E0113 20:32:54.212795    2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69b5874dc7-m22rh_calico-apiserver(5c636eda-04d8-4d14-8cff-128a25fb05c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69b5874dc7-m22rh_calico-apiserver(5c636eda-04d8-4d14-8cff-128a25fb05c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"690bc1b2626d272c4e59557880da2cfe9cd245155628eec4d3173c850c242820\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69b5874dc7-m22rh" podUID="5c636eda-04d8-4d14-8cff-128a25fb05c4"
Jan 13 20:32:54.221445 containerd[1468]: time="2025-01-13T20:32:54.221312984Z" level=error msg="Failed to destroy network for sandbox \"6977718389673ffa34f94051255432a5ee2d659cfc51e6ec585e2155c236362a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:54.222165 containerd[1468]: time="2025-01-13T20:32:54.222131287Z" level=error msg="encountered an error cleaning up failed sandbox \"6977718389673ffa34f94051255432a5ee2d659cfc51e6ec585e2155c236362a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:54.222472 containerd[1468]: time="2025-01-13T20:32:54.222311612Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8574fbbb74-f4267,Uid:45eae7c1-df7d-4875-9c9e-bd5fcfaa32a1,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"6977718389673ffa34f94051255432a5ee2d659cfc51e6ec585e2155c236362a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:54.222472 containerd[1468]: time="2025-01-13T20:32:54.222323652Z" level=error msg="Failed to destroy network for sandbox \"b74bef2fae98d998e15814ed273657c2cbec573cd123ffc705d8c82518fdab91\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:54.222801 kubelet[2539]: E0113 20:32:54.222751    2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6977718389673ffa34f94051255432a5ee2d659cfc51e6ec585e2155c236362a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:54.222883 kubelet[2539]: E0113 20:32:54.222821    2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6977718389673ffa34f94051255432a5ee2d659cfc51e6ec585e2155c236362a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8574fbbb74-f4267"
Jan 13 20:32:54.222883 kubelet[2539]: E0113 20:32:54.222841    2539 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6977718389673ffa34f94051255432a5ee2d659cfc51e6ec585e2155c236362a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8574fbbb74-f4267"
Jan 13 20:32:54.222986 kubelet[2539]: E0113 20:32:54.222892    2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8574fbbb74-f4267_calico-system(45eae7c1-df7d-4875-9c9e-bd5fcfaa32a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8574fbbb74-f4267_calico-system(45eae7c1-df7d-4875-9c9e-bd5fcfaa32a1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6977718389673ffa34f94051255432a5ee2d659cfc51e6ec585e2155c236362a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8574fbbb74-f4267" podUID="45eae7c1-df7d-4875-9c9e-bd5fcfaa32a1"
Jan 13 20:32:54.223243 containerd[1468]: time="2025-01-13T20:32:54.223212997Z" level=error msg="encountered an error cleaning up failed sandbox \"b74bef2fae98d998e15814ed273657c2cbec573cd123ffc705d8c82518fdab91\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:54.223439 containerd[1468]: time="2025-01-13T20:32:54.223416083Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2rhs8,Uid:94b6ed52-6d9b-4b44-8661-86ed9c610caa,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"b74bef2fae98d998e15814ed273657c2cbec573cd123ffc705d8c82518fdab91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:54.224238 kubelet[2539]: E0113 20:32:54.223609    2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b74bef2fae98d998e15814ed273657c2cbec573cd123ffc705d8c82518fdab91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:54.224238 kubelet[2539]: E0113 20:32:54.223960    2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b74bef2fae98d998e15814ed273657c2cbec573cd123ffc705d8c82518fdab91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-2rhs8"
Jan 13 20:32:54.224238 kubelet[2539]: E0113 20:32:54.223985    2539 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b74bef2fae98d998e15814ed273657c2cbec573cd123ffc705d8c82518fdab91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-2rhs8"
Jan 13 20:32:54.224349 kubelet[2539]: E0113 20:32:54.224028    2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-2rhs8_kube-system(94b6ed52-6d9b-4b44-8661-86ed9c610caa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-2rhs8_kube-system(94b6ed52-6d9b-4b44-8661-86ed9c610caa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b74bef2fae98d998e15814ed273657c2cbec573cd123ffc705d8c82518fdab91\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-2rhs8" podUID="94b6ed52-6d9b-4b44-8661-86ed9c610caa"
Jan 13 20:32:54.237447 containerd[1468]: time="2025-01-13T20:32:54.237112625Z" level=error msg="Failed to destroy network for sandbox \"9c46d1e1f1119a188ba9bca6d2f18e6cb004fee7d9a26a534332a533b904c01f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:54.239682 containerd[1468]: time="2025-01-13T20:32:54.239417010Z" level=error msg="encountered an error cleaning up failed sandbox \"9c46d1e1f1119a188ba9bca6d2f18e6cb004fee7d9a26a534332a533b904c01f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:54.240113 containerd[1468]: time="2025-01-13T20:32:54.239977425Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gfjnm,Uid:956a79b0-4c28-4303-8930-015d57ee6a8d,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"9c46d1e1f1119a188ba9bca6d2f18e6cb004fee7d9a26a534332a533b904c01f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:54.240774 kubelet[2539]: E0113 20:32:54.240735    2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c46d1e1f1119a188ba9bca6d2f18e6cb004fee7d9a26a534332a533b904c01f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:54.240867 kubelet[2539]: E0113 20:32:54.240792    2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c46d1e1f1119a188ba9bca6d2f18e6cb004fee7d9a26a534332a533b904c01f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-gfjnm"
Jan 13 20:32:54.240867 kubelet[2539]: E0113 20:32:54.240815    2539 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c46d1e1f1119a188ba9bca6d2f18e6cb004fee7d9a26a534332a533b904c01f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-gfjnm"
Jan 13 20:32:54.240867 kubelet[2539]: E0113 20:32:54.240852    2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-gfjnm_kube-system(956a79b0-4c28-4303-8930-015d57ee6a8d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-gfjnm_kube-system(956a79b0-4c28-4303-8930-015d57ee6a8d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c46d1e1f1119a188ba9bca6d2f18e6cb004fee7d9a26a534332a533b904c01f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-gfjnm" podUID="956a79b0-4c28-4303-8930-015d57ee6a8d"
Jan 13 20:32:54.247789 containerd[1468]: time="2025-01-13T20:32:54.247743922Z" level=error msg="Failed to destroy network for sandbox \"2ba4acd24d6eba8273dca26b97b48b62cc98f439908871df9ee12981c7a74f8f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:54.248359 containerd[1468]: time="2025-01-13T20:32:54.248206095Z" level=error msg="encountered an error cleaning up failed sandbox \"2ba4acd24d6eba8273dca26b97b48b62cc98f439908871df9ee12981c7a74f8f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:54.248359 containerd[1468]: time="2025-01-13T20:32:54.248261336Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mz55k,Uid:1ccd530c-1ce4-41fc-b0fc-1d9142439edd,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"2ba4acd24d6eba8273dca26b97b48b62cc98f439908871df9ee12981c7a74f8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:54.248825 kubelet[2539]: E0113 20:32:54.248780    2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ba4acd24d6eba8273dca26b97b48b62cc98f439908871df9ee12981c7a74f8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:54.249482 kubelet[2539]: E0113 20:32:54.248840    2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ba4acd24d6eba8273dca26b97b48b62cc98f439908871df9ee12981c7a74f8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mz55k"
Jan 13 20:32:54.249482 kubelet[2539]: E0113 20:32:54.248858    2539 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ba4acd24d6eba8273dca26b97b48b62cc98f439908871df9ee12981c7a74f8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mz55k"
Jan 13 20:32:54.249482 kubelet[2539]: E0113 20:32:54.248902    2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mz55k_calico-system(1ccd530c-1ce4-41fc-b0fc-1d9142439edd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mz55k_calico-system(1ccd530c-1ce4-41fc-b0fc-1d9142439edd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2ba4acd24d6eba8273dca26b97b48b62cc98f439908871df9ee12981c7a74f8f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mz55k" podUID="1ccd530c-1ce4-41fc-b0fc-1d9142439edd"
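[editor's note] Note the Attempt counter in the sandbox metadata: the first round failed as Attempt:0, this round as Attempt:1, and the kubelet's sync loop keeps tearing sandboxes down and re-running them (Attempt:2 below) until the CNI plugin succeeds. A toy loop showing the shape of that retry under the same missing-file condition; the real kubelet retries from its pod workers with backoff, not a tight loop:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        for attempt := 0; attempt < 3; attempt++ {
            if _, err := os.Stat("/var/lib/calico/nodename"); err != nil {
                fmt.Printf("RunPodSandbox Attempt:%d failed: %v\n", attempt, err)
                continue // kubelet would back off, tear down, and retry
            }
            fmt.Printf("RunPodSandbox Attempt:%d succeeded\n", attempt)
            return
        }
    }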
Jan 13 20:32:54.818379 systemd[1]: run-netns-cni\x2d38e8904d\x2d74cb\x2d6d33\x2d43fc\x2d204444a2dce7.mount: Deactivated successfully.
Jan 13 20:32:54.818473 systemd[1]: run-netns-cni\x2d2bae4a29\x2d20fa\x2d95d2\x2d51fd\x2d780646b0dc54.mount: Deactivated successfully.
Jan 13 20:32:54.918930 kubelet[2539]: I0113 20:32:54.918853    2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="690bc1b2626d272c4e59557880da2cfe9cd245155628eec4d3173c850c242820"
Jan 13 20:32:54.919548 containerd[1468]: time="2025-01-13T20:32:54.919438436Z" level=info msg="StopPodSandbox for \"690bc1b2626d272c4e59557880da2cfe9cd245155628eec4d3173c850c242820\""
Jan 13 20:32:54.919801 containerd[1468]: time="2025-01-13T20:32:54.919584160Z" level=info msg="Ensure that sandbox 690bc1b2626d272c4e59557880da2cfe9cd245155628eec4d3173c850c242820 in task-service has been cleanup successfully"
Jan 13 20:32:54.919801 containerd[1468]: time="2025-01-13T20:32:54.919758765Z" level=info msg="TearDown network for sandbox \"690bc1b2626d272c4e59557880da2cfe9cd245155628eec4d3173c850c242820\" successfully"
Jan 13 20:32:54.919801 containerd[1468]: time="2025-01-13T20:32:54.919771805Z" level=info msg="StopPodSandbox for \"690bc1b2626d272c4e59557880da2cfe9cd245155628eec4d3173c850c242820\" returns successfully"
Jan 13 20:32:54.924831 kubelet[2539]: I0113 20:32:54.920394    2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c46d1e1f1119a188ba9bca6d2f18e6cb004fee7d9a26a534332a533b904c01f"
Jan 13 20:32:54.924831 kubelet[2539]: E0113 20:32:54.922558    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:54.924831 kubelet[2539]: I0113 20:32:54.923169    2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b74bef2fae98d998e15814ed273657c2cbec573cd123ffc705d8c82518fdab91"
Jan 13 20:32:54.922859 systemd[1]: run-netns-cni\x2d4f986332\x2d6737\x2d8aec\x2d5bd7\x2d82e27471648e.mount: Deactivated successfully.
Jan 13 20:32:54.925118 containerd[1468]: time="2025-01-13T20:32:54.920874356Z" level=info msg="StopPodSandbox for \"9c46d1e1f1119a188ba9bca6d2f18e6cb004fee7d9a26a534332a533b904c01f\""
Jan 13 20:32:54.925118 containerd[1468]: time="2025-01-13T20:32:54.921193925Z" level=info msg="StopPodSandbox for \"2171bb1e1dd6639cc619c888993f6c37ba501d56796a45fb4a35b31eef705b9c\""
Jan 13 20:32:54.925118 containerd[1468]: time="2025-01-13T20:32:54.921269567Z" level=info msg="TearDown network for sandbox \"2171bb1e1dd6639cc619c888993f6c37ba501d56796a45fb4a35b31eef705b9c\" successfully"
Jan 13 20:32:54.925118 containerd[1468]: time="2025-01-13T20:32:54.921280527Z" level=info msg="StopPodSandbox for \"2171bb1e1dd6639cc619c888993f6c37ba501d56796a45fb4a35b31eef705b9c\" returns successfully"
Jan 13 20:32:54.925118 containerd[1468]: time="2025-01-13T20:32:54.921623377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b5874dc7-m22rh,Uid:5c636eda-04d8-4d14-8cff-128a25fb05c4,Namespace:calico-apiserver,Attempt:2,}"
Jan 13 20:32:54.925118 containerd[1468]: time="2025-01-13T20:32:54.921729420Z" level=info msg="Ensure that sandbox 9c46d1e1f1119a188ba9bca6d2f18e6cb004fee7d9a26a534332a533b904c01f in task-service has been cleanup successfully"
Jan 13 20:32:54.925118 containerd[1468]: time="2025-01-13T20:32:54.921936466Z" level=info msg="TearDown network for sandbox \"9c46d1e1f1119a188ba9bca6d2f18e6cb004fee7d9a26a534332a533b904c01f\" successfully"
Jan 13 20:32:54.925118 containerd[1468]: time="2025-01-13T20:32:54.921950506Z" level=info msg="StopPodSandbox for \"9c46d1e1f1119a188ba9bca6d2f18e6cb004fee7d9a26a534332a533b904c01f\" returns successfully"
Jan 13 20:32:54.925118 containerd[1468]: time="2025-01-13T20:32:54.922305676Z" level=info msg="StopPodSandbox for \"2a8d29fb1359b46e9799e64a9f306066c6114265817d537da33b51b5aca49153\""
Jan 13 20:32:54.925118 containerd[1468]: time="2025-01-13T20:32:54.922368678Z" level=info msg="TearDown network for sandbox \"2a8d29fb1359b46e9799e64a9f306066c6114265817d537da33b51b5aca49153\" successfully"
Jan 13 20:32:54.925118 containerd[1468]: time="2025-01-13T20:32:54.922378158Z" level=info msg="StopPodSandbox for \"2a8d29fb1359b46e9799e64a9f306066c6114265817d537da33b51b5aca49153\" returns successfully"
Jan 13 20:32:54.925118 containerd[1468]: time="2025-01-13T20:32:54.922816330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gfjnm,Uid:956a79b0-4c28-4303-8930-015d57ee6a8d,Namespace:kube-system,Attempt:2,}"
Jan 13 20:32:54.925118 containerd[1468]: time="2025-01-13T20:32:54.924310252Z" level=info msg="StopPodSandbox for \"b74bef2fae98d998e15814ed273657c2cbec573cd123ffc705d8c82518fdab91\""
Jan 13 20:32:54.925118 containerd[1468]: time="2025-01-13T20:32:54.924438456Z" level=info msg="Ensure that sandbox b74bef2fae98d998e15814ed273657c2cbec573cd123ffc705d8c82518fdab91 in task-service has been cleanup successfully"
Jan 13 20:32:54.925118 containerd[1468]: time="2025-01-13T20:32:54.924747784Z" level=info msg="TearDown network for sandbox \"b74bef2fae98d998e15814ed273657c2cbec573cd123ffc705d8c82518fdab91\" successfully"
Jan 13 20:32:54.925118 containerd[1468]: time="2025-01-13T20:32:54.924761265Z" level=info msg="StopPodSandbox for \"b74bef2fae98d998e15814ed273657c2cbec573cd123ffc705d8c82518fdab91\" returns successfully"
Jan 13 20:32:54.925863 systemd[1]: run-netns-cni\x2d6b4b00f8\x2da3b5\x2da101\x2d5042\x2dcd22e1836b1e.mount: Deactivated successfully.
Jan 13 20:32:54.926454 kubelet[2539]: E0113 20:32:54.926140    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
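Annotation: the recurring "Nameserver limits exceeded" warning is unrelated to the Calico failure. kubelet caps the nameserver list it writes into a pod's resolv.conf at three entries (the classic glibc MAXNS limit); this node's resolv.conf evidently lists more than three servers, so kubelet keeps the first three (1.1.1.1, 1.0.0.1, 8.8.8.8) and logs this on every sandbox attempt. An illustrative sketch of that truncation, not kubelet's actual code; the fourth server in the example is made up:

```go
// Sketch of the nameserver truncation behind the dns.go warning above.
package main

import "fmt"

const maxDNSNameservers = 3 // kubelet's cap, matching the glibc MAXNS limit

func applyNameserverLimit(servers []string) (applied []string, omitted bool) {
	if len(servers) <= maxDNSNameservers {
		return servers, false
	}
	return servers[:maxDNSNameservers], true
}

func main() {
	// "8.8.4.4" is a hypothetical fourth entry to trigger the warning path.
	applied, omitted := applyNameserverLimit(
		[]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"})
	if omitted {
		fmt.Println("Nameserver limits were exceeded, applied:", applied)
	}
}
```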
Jan 13 20:32:54.926488 containerd[1468]: time="2025-01-13T20:32:54.925866535Z" level=info msg="StopPodSandbox for \"227014b4c404495432de99cad5b4d7041c29a5b1fa0a53c7fdad08f5eb812dc2\""
Jan 13 20:32:54.926488 containerd[1468]: time="2025-01-13T20:32:54.925968578Z" level=info msg="TearDown network for sandbox \"227014b4c404495432de99cad5b4d7041c29a5b1fa0a53c7fdad08f5eb812dc2\" successfully"
Jan 13 20:32:54.926488 containerd[1468]: time="2025-01-13T20:32:54.925978579Z" level=info msg="StopPodSandbox for \"227014b4c404495432de99cad5b4d7041c29a5b1fa0a53c7fdad08f5eb812dc2\" returns successfully"
Jan 13 20:32:54.927078 containerd[1468]: time="2025-01-13T20:32:54.926958406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2rhs8,Uid:94b6ed52-6d9b-4b44-8661-86ed9c610caa,Namespace:kube-system,Attempt:2,}"
Jan 13 20:32:54.928842 kubelet[2539]: I0113 20:32:54.928811    2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6977718389673ffa34f94051255432a5ee2d659cfc51e6ec585e2155c236362a"
Jan 13 20:32:54.929367 systemd[1]: run-netns-cni\x2d6e502e08\x2dedc1\x2ddb7c\x2d0883\x2d9df48272dabe.mount: Deactivated successfully.
Jan 13 20:32:54.929811 containerd[1468]: time="2025-01-13T20:32:54.929778885Z" level=info msg="StopPodSandbox for \"6977718389673ffa34f94051255432a5ee2d659cfc51e6ec585e2155c236362a\""
Jan 13 20:32:54.930077 containerd[1468]: time="2025-01-13T20:32:54.930022851Z" level=info msg="Ensure that sandbox 6977718389673ffa34f94051255432a5ee2d659cfc51e6ec585e2155c236362a in task-service has been cleanup successfully"
Jan 13 20:32:54.932823 systemd[1]: run-netns-cni\x2df536df2f\x2dad71\x2d7dbb\x2d5c19\x2d23f3dd4965d0.mount: Deactivated successfully.
Jan 13 20:32:54.939632 containerd[1468]: time="2025-01-13T20:32:54.937186051Z" level=info msg="TearDown network for sandbox \"6977718389673ffa34f94051255432a5ee2d659cfc51e6ec585e2155c236362a\" successfully"
Jan 13 20:32:54.939632 containerd[1468]: time="2025-01-13T20:32:54.937223893Z" level=info msg="StopPodSandbox for \"6977718389673ffa34f94051255432a5ee2d659cfc51e6ec585e2155c236362a\" returns successfully"
Jan 13 20:32:54.939632 containerd[1468]: time="2025-01-13T20:32:54.938478728Z" level=info msg="StopPodSandbox for \"fb1a52e546c74327b879443ffdc1b6e9a34008fea5cd189b80d312803f2bce7e\""
Jan 13 20:32:54.939632 containerd[1468]: time="2025-01-13T20:32:54.938568490Z" level=info msg="TearDown network for sandbox \"fb1a52e546c74327b879443ffdc1b6e9a34008fea5cd189b80d312803f2bce7e\" successfully"
Jan 13 20:32:54.939632 containerd[1468]: time="2025-01-13T20:32:54.938578890Z" level=info msg="StopPodSandbox for \"fb1a52e546c74327b879443ffdc1b6e9a34008fea5cd189b80d312803f2bce7e\" returns successfully"
Jan 13 20:32:54.939632 containerd[1468]: time="2025-01-13T20:32:54.939238869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8574fbbb74-f4267,Uid:45eae7c1-df7d-4875-9c9e-bd5fcfaa32a1,Namespace:calico-system,Attempt:2,}"
Jan 13 20:32:54.941594 kubelet[2539]: I0113 20:32:54.941481    2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ba4acd24d6eba8273dca26b97b48b62cc98f439908871df9ee12981c7a74f8f"
Jan 13 20:32:54.942766 containerd[1468]: time="2025-01-13T20:32:54.942065628Z" level=info msg="StopPodSandbox for \"2ba4acd24d6eba8273dca26b97b48b62cc98f439908871df9ee12981c7a74f8f\""
Jan 13 20:32:54.942766 containerd[1468]: time="2025-01-13T20:32:54.942260233Z" level=info msg="Ensure that sandbox 2ba4acd24d6eba8273dca26b97b48b62cc98f439908871df9ee12981c7a74f8f in task-service has been cleanup successfully"
Jan 13 20:32:54.942766 containerd[1468]: time="2025-01-13T20:32:54.942464199Z" level=info msg="TearDown network for sandbox \"2ba4acd24d6eba8273dca26b97b48b62cc98f439908871df9ee12981c7a74f8f\" successfully"
Jan 13 20:32:54.942766 containerd[1468]: time="2025-01-13T20:32:54.942489320Z" level=info msg="StopPodSandbox for \"2ba4acd24d6eba8273dca26b97b48b62cc98f439908871df9ee12981c7a74f8f\" returns successfully"
Jan 13 20:32:54.943419 containerd[1468]: time="2025-01-13T20:32:54.943373544Z" level=info msg="StopPodSandbox for \"d8d428451f0a685837713c470e2664b25b6bc2aa86d8ee5c05620efa28ca3f3c\""
Jan 13 20:32:54.943487 containerd[1468]: time="2025-01-13T20:32:54.943464547Z" level=info msg="TearDown network for sandbox \"d8d428451f0a685837713c470e2664b25b6bc2aa86d8ee5c05620efa28ca3f3c\" successfully"
Jan 13 20:32:54.943536 containerd[1468]: time="2025-01-13T20:32:54.943513628Z" level=info msg="StopPodSandbox for \"d8d428451f0a685837713c470e2664b25b6bc2aa86d8ee5c05620efa28ca3f3c\" returns successfully"
Jan 13 20:32:54.944057 containerd[1468]: time="2025-01-13T20:32:54.944026322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mz55k,Uid:1ccd530c-1ce4-41fc-b0fc-1d9142439edd,Namespace:calico-system,Attempt:2,}"
Jan 13 20:32:54.944951 kubelet[2539]: I0113 20:32:54.944852    2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c32a74c6158719d3d3dd47a43aa691fc67ffd8fadea51c76e1cabe46239ee209"
Jan 13 20:32:54.945394 containerd[1468]: time="2025-01-13T20:32:54.945358840Z" level=info msg="StopPodSandbox for \"c32a74c6158719d3d3dd47a43aa691fc67ffd8fadea51c76e1cabe46239ee209\""
Jan 13 20:32:54.945681 containerd[1468]: time="2025-01-13T20:32:54.945543285Z" level=info msg="Ensure that sandbox c32a74c6158719d3d3dd47a43aa691fc67ffd8fadea51c76e1cabe46239ee209 in task-service has been cleanup successfully"
Jan 13 20:32:54.946224 containerd[1468]: time="2025-01-13T20:32:54.946198263Z" level=info msg="TearDown network for sandbox \"c32a74c6158719d3d3dd47a43aa691fc67ffd8fadea51c76e1cabe46239ee209\" successfully"
Jan 13 20:32:54.946224 containerd[1468]: time="2025-01-13T20:32:54.946224144Z" level=info msg="StopPodSandbox for \"c32a74c6158719d3d3dd47a43aa691fc67ffd8fadea51c76e1cabe46239ee209\" returns successfully"
Jan 13 20:32:54.946568 containerd[1468]: time="2025-01-13T20:32:54.946542353Z" level=info msg="StopPodSandbox for \"af8f3d97f148ecdc6de1f17773435b7a3768a0af4d1c424f916403d78a577bb9\""
Jan 13 20:32:54.946633 containerd[1468]: time="2025-01-13T20:32:54.946620275Z" level=info msg="TearDown network for sandbox \"af8f3d97f148ecdc6de1f17773435b7a3768a0af4d1c424f916403d78a577bb9\" successfully"
Jan 13 20:32:54.946669 containerd[1468]: time="2025-01-13T20:32:54.946632915Z" level=info msg="StopPodSandbox for \"af8f3d97f148ecdc6de1f17773435b7a3768a0af4d1c424f916403d78a577bb9\" returns successfully"
Jan 13 20:32:54.947120 containerd[1468]: time="2025-01-13T20:32:54.947094088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b5874dc7-pn7tk,Uid:020d487a-592d-466b-b805-5233e1a92845,Namespace:calico-apiserver,Attempt:2,}"
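Annotation: before each retry, containerd's CRI plugin replays StopPodSandbox and TearDown for every sandbox previously associated with the pod, which is why each "Ensure that sandbox … has been cleanup successfully" line is followed by a chain of older sandbox IDs all "returning successfully" even though the node's CNI is broken. Tearing down a network that is already gone is treated as success, so the cleanup is idempotent. A minimal sketch of that idempotence, under invented names (cleanupSandbox, netnsPath are not containerd's API):

```go
// Idempotent teardown sketch: a missing netns counts as success, so repeated
// TearDown calls for the same sandbox keep "returning successfully".
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func netnsPath(id string) string { return "/var/run/netns/cni-" + id }

func cleanupSandbox(id string) error {
	if err := os.Remove(netnsPath(id)); err != nil && !errors.Is(err, fs.ErrNotExist) {
		return fmt.Errorf("teardown %s: %w", id, err)
	}
	return nil // already gone: success
}

func main() {
	fmt.Println(cleanupSandbox("38e8904d-74cb-6d33-43fc-204444a2dce7")) // <nil>
}
```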
Jan 13 20:32:55.151813 containerd[1468]: time="2025-01-13T20:32:55.151665055Z" level=error msg="Failed to destroy network for sandbox \"4f33082086425a6bd03f3c4d11ee2a9a91e37080b1b783cabdc6ac289975a899\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:55.154104 containerd[1468]: time="2025-01-13T20:32:55.154057319Z" level=error msg="Failed to destroy network for sandbox \"1e045b20982eb4edb8bcbc62d8bb5f2e18c80d6964a710161da81c59cd642336\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:55.154629 containerd[1468]: time="2025-01-13T20:32:55.154584493Z" level=error msg="encountered an error cleaning up failed sandbox \"4f33082086425a6bd03f3c4d11ee2a9a91e37080b1b783cabdc6ac289975a899\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:55.154692 containerd[1468]: time="2025-01-13T20:32:55.154659015Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gfjnm,Uid:956a79b0-4c28-4303-8930-015d57ee6a8d,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"4f33082086425a6bd03f3c4d11ee2a9a91e37080b1b783cabdc6ac289975a899\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:55.154692 containerd[1468]: time="2025-01-13T20:32:55.154673536Z" level=error msg="encountered an error cleaning up failed sandbox \"1e045b20982eb4edb8bcbc62d8bb5f2e18c80d6964a710161da81c59cd642336\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:55.154745 containerd[1468]: time="2025-01-13T20:32:55.154723297Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b5874dc7-m22rh,Uid:5c636eda-04d8-4d14-8cff-128a25fb05c4,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"1e045b20982eb4edb8bcbc62d8bb5f2e18c80d6964a710161da81c59cd642336\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:55.154901 kubelet[2539]: E0113 20:32:55.154861    2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f33082086425a6bd03f3c4d11ee2a9a91e37080b1b783cabdc6ac289975a899\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:55.155002 kubelet[2539]: E0113 20:32:55.154937    2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f33082086425a6bd03f3c4d11ee2a9a91e37080b1b783cabdc6ac289975a899\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-gfjnm"
Jan 13 20:32:55.155002 kubelet[2539]: E0113 20:32:55.154965    2539 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f33082086425a6bd03f3c4d11ee2a9a91e37080b1b783cabdc6ac289975a899\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-gfjnm"
Jan 13 20:32:55.155072 kubelet[2539]: E0113 20:32:55.155001    2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-gfjnm_kube-system(956a79b0-4c28-4303-8930-015d57ee6a8d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-gfjnm_kube-system(956a79b0-4c28-4303-8930-015d57ee6a8d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f33082086425a6bd03f3c4d11ee2a9a91e37080b1b783cabdc6ac289975a899\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-gfjnm" podUID="956a79b0-4c28-4303-8930-015d57ee6a8d"
Jan 13 20:32:55.157068 kubelet[2539]: E0113 20:32:55.157003    2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e045b20982eb4edb8bcbc62d8bb5f2e18c80d6964a710161da81c59cd642336\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:55.157132 kubelet[2539]: E0113 20:32:55.157083    2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e045b20982eb4edb8bcbc62d8bb5f2e18c80d6964a710161da81c59cd642336\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69b5874dc7-m22rh"
Jan 13 20:32:55.157132 kubelet[2539]: E0113 20:32:55.157117    2539 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e045b20982eb4edb8bcbc62d8bb5f2e18c80d6964a710161da81c59cd642336\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69b5874dc7-m22rh"
Jan 13 20:32:55.157197 kubelet[2539]: E0113 20:32:55.157155    2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69b5874dc7-m22rh_calico-apiserver(5c636eda-04d8-4d14-8cff-128a25fb05c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69b5874dc7-m22rh_calico-apiserver(5c636eda-04d8-4d14-8cff-128a25fb05c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1e045b20982eb4edb8bcbc62d8bb5f2e18c80d6964a710161da81c59cd642336\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69b5874dc7-m22rh" podUID="5c636eda-04d8-4d14-8cff-128a25fb05c4"
Jan 13 20:32:55.160294 containerd[1468]: time="2025-01-13T20:32:55.160249326Z" level=error msg="Failed to destroy network for sandbox \"b996ea66c3d596bd6c32c7a8855b207d5452ab0171ba32622b60b63f735bcdd0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:55.160848 containerd[1468]: time="2025-01-13T20:32:55.160815181Z" level=error msg="encountered an error cleaning up failed sandbox \"b996ea66c3d596bd6c32c7a8855b207d5452ab0171ba32622b60b63f735bcdd0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:55.160918 containerd[1468]: time="2025-01-13T20:32:55.160876263Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2rhs8,Uid:94b6ed52-6d9b-4b44-8661-86ed9c610caa,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"b996ea66c3d596bd6c32c7a8855b207d5452ab0171ba32622b60b63f735bcdd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:55.161517 kubelet[2539]: E0113 20:32:55.161451    2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b996ea66c3d596bd6c32c7a8855b207d5452ab0171ba32622b60b63f735bcdd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:55.161574 kubelet[2539]: E0113 20:32:55.161525    2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b996ea66c3d596bd6c32c7a8855b207d5452ab0171ba32622b60b63f735bcdd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-2rhs8"
Jan 13 20:32:55.161574 kubelet[2539]: E0113 20:32:55.161544    2539 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b996ea66c3d596bd6c32c7a8855b207d5452ab0171ba32622b60b63f735bcdd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-2rhs8"
Jan 13 20:32:55.161667 kubelet[2539]: E0113 20:32:55.161592    2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-2rhs8_kube-system(94b6ed52-6d9b-4b44-8661-86ed9c610caa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-2rhs8_kube-system(94b6ed52-6d9b-4b44-8661-86ed9c610caa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b996ea66c3d596bd6c32c7a8855b207d5452ab0171ba32622b60b63f735bcdd0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-2rhs8" podUID="94b6ed52-6d9b-4b44-8661-86ed9c610caa"
Jan 13 20:32:55.168774 containerd[1468]: time="2025-01-13T20:32:55.168734675Z" level=error msg="Failed to destroy network for sandbox \"86074dac164d545a6bdfaaede8e9762f2bdc237875019f4f8b3c5b856b84c244\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:55.169096 containerd[1468]: time="2025-01-13T20:32:55.169073164Z" level=error msg="encountered an error cleaning up failed sandbox \"86074dac164d545a6bdfaaede8e9762f2bdc237875019f4f8b3c5b856b84c244\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:55.169147 containerd[1468]: time="2025-01-13T20:32:55.169128525Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b5874dc7-pn7tk,Uid:020d487a-592d-466b-b805-5233e1a92845,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"86074dac164d545a6bdfaaede8e9762f2bdc237875019f4f8b3c5b856b84c244\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:55.169343 kubelet[2539]: E0113 20:32:55.169313    2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86074dac164d545a6bdfaaede8e9762f2bdc237875019f4f8b3c5b856b84c244\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:55.169378 kubelet[2539]: E0113 20:32:55.169365    2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86074dac164d545a6bdfaaede8e9762f2bdc237875019f4f8b3c5b856b84c244\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69b5874dc7-pn7tk"
Jan 13 20:32:55.169409 kubelet[2539]: E0113 20:32:55.169384    2539 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86074dac164d545a6bdfaaede8e9762f2bdc237875019f4f8b3c5b856b84c244\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69b5874dc7-pn7tk"
Jan 13 20:32:55.169438 kubelet[2539]: E0113 20:32:55.169417    2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69b5874dc7-pn7tk_calico-apiserver(020d487a-592d-466b-b805-5233e1a92845)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69b5874dc7-pn7tk_calico-apiserver(020d487a-592d-466b-b805-5233e1a92845)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86074dac164d545a6bdfaaede8e9762f2bdc237875019f4f8b3c5b856b84c244\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69b5874dc7-pn7tk" podUID="020d487a-592d-466b-b805-5233e1a92845"
Jan 13 20:32:55.171312 containerd[1468]: time="2025-01-13T20:32:55.171274303Z" level=error msg="Failed to destroy network for sandbox \"c60e04b971ad8b6ab716024fbbd0951648e672e34c310ac5f4b66861370c860b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:55.172643 containerd[1468]: time="2025-01-13T20:32:55.172451055Z" level=error msg="encountered an error cleaning up failed sandbox \"c60e04b971ad8b6ab716024fbbd0951648e672e34c310ac5f4b66861370c860b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:55.172643 containerd[1468]: time="2025-01-13T20:32:55.172504976Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8574fbbb74-f4267,Uid:45eae7c1-df7d-4875-9c9e-bd5fcfaa32a1,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"c60e04b971ad8b6ab716024fbbd0951648e672e34c310ac5f4b66861370c860b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:55.172753 kubelet[2539]: E0113 20:32:55.172667    2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c60e04b971ad8b6ab716024fbbd0951648e672e34c310ac5f4b66861370c860b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:55.172783 kubelet[2539]: E0113 20:32:55.172750    2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c60e04b971ad8b6ab716024fbbd0951648e672e34c310ac5f4b66861370c860b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8574fbbb74-f4267"
Jan 13 20:32:55.172783 kubelet[2539]: E0113 20:32:55.172768    2539 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c60e04b971ad8b6ab716024fbbd0951648e672e34c310ac5f4b66861370c860b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8574fbbb74-f4267"
Jan 13 20:32:55.172880 kubelet[2539]: E0113 20:32:55.172837    2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8574fbbb74-f4267_calico-system(45eae7c1-df7d-4875-9c9e-bd5fcfaa32a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8574fbbb74-f4267_calico-system(45eae7c1-df7d-4875-9c9e-bd5fcfaa32a1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c60e04b971ad8b6ab716024fbbd0951648e672e34c310ac5f4b66861370c860b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8574fbbb74-f4267" podUID="45eae7c1-df7d-4875-9c9e-bd5fcfaa32a1"
Jan 13 20:32:55.185174 containerd[1468]: time="2025-01-13T20:32:55.185137117Z" level=error msg="Failed to destroy network for sandbox \"98bf47fa53140be7d7fc7b0a4bfde2dd1ee28edfc19f866eafc53b6a5c680326\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:55.185765 containerd[1468]: time="2025-01-13T20:32:55.185654251Z" level=error msg="encountered an error cleaning up failed sandbox \"98bf47fa53140be7d7fc7b0a4bfde2dd1ee28edfc19f866eafc53b6a5c680326\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:55.185765 containerd[1468]: time="2025-01-13T20:32:55.185737853Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mz55k,Uid:1ccd530c-1ce4-41fc-b0fc-1d9142439edd,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"98bf47fa53140be7d7fc7b0a4bfde2dd1ee28edfc19f866eafc53b6a5c680326\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:55.186178 kubelet[2539]: E0113 20:32:55.186136    2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98bf47fa53140be7d7fc7b0a4bfde2dd1ee28edfc19f866eafc53b6a5c680326\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:55.186256 kubelet[2539]: E0113 20:32:55.186234    2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98bf47fa53140be7d7fc7b0a4bfde2dd1ee28edfc19f866eafc53b6a5c680326\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mz55k"
Jan 13 20:32:55.186283 kubelet[2539]: E0113 20:32:55.186255    2539 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98bf47fa53140be7d7fc7b0a4bfde2dd1ee28edfc19f866eafc53b6a5c680326\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mz55k"
Jan 13 20:32:55.187225 kubelet[2539]: E0113 20:32:55.186326    2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mz55k_calico-system(1ccd530c-1ce4-41fc-b0fc-1d9142439edd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mz55k_calico-system(1ccd530c-1ce4-41fc-b0fc-1d9142439edd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"98bf47fa53140be7d7fc7b0a4bfde2dd1ee28edfc19f866eafc53b6a5c680326\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mz55k" podUID="1ccd530c-1ce4-41fc-b0fc-1d9142439edd"
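Annotation: each "Error syncing pod, skipping" hands the pod back to kubelet's pod workers, which retry under backoff rather than in a tight loop; that pacing is why the Attempt counter climbs one step per cycle here instead of spinning. A rough sketch of retry-with-exponential-backoff, with illustrative constants that are not kubelet's actual tuning:

```go
// Backoff sketch only: doubling delay with a ceiling, the general shape of
// the retry pacing behind "Error syncing pod, skipping".
package main

import (
	"fmt"
	"time"
)

func nextBackoff(cur time.Duration) time.Duration {
	const (
		initial = 10 * time.Second // illustrative, not kubelet's value
		maxWait = 5 * time.Minute  // illustrative ceiling
	)
	if cur == 0 {
		return initial
	}
	if cur*2 > maxWait {
		return maxWait
	}
	return cur * 2
}

func main() {
	var d time.Duration
	for i := 0; i < 6; i++ {
		d = nextBackoff(d)
		fmt.Printf("attempt %d waits %v\n", i+1, d)
	}
}
```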
Jan 13 20:32:55.819247 systemd[1]: run-netns-cni\x2d06de476c\x2df8a6\x2d6fb1\x2d4ea8\x2d9cb8186ccf18.mount: Deactivated successfully.
Jan 13 20:32:55.819334 systemd[1]: run-netns-cni\x2d683e209f\x2d1d70\x2d79b9\x2da9fe\x2dd3001d23f878.mount: Deactivated successfully.
Jan 13 20:32:55.947955 kubelet[2539]: I0113 20:32:55.947758    2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86074dac164d545a6bdfaaede8e9762f2bdc237875019f4f8b3c5b856b84c244"
Jan 13 20:32:55.949730 containerd[1468]: time="2025-01-13T20:32:55.949351278Z" level=info msg="StopPodSandbox for \"86074dac164d545a6bdfaaede8e9762f2bdc237875019f4f8b3c5b856b84c244\""
Jan 13 20:32:55.949730 containerd[1468]: time="2025-01-13T20:32:55.949534563Z" level=info msg="Ensure that sandbox 86074dac164d545a6bdfaaede8e9762f2bdc237875019f4f8b3c5b856b84c244 in task-service has been cleanup successfully"
Jan 13 20:32:55.950306 containerd[1468]: time="2025-01-13T20:32:55.950192461Z" level=info msg="TearDown network for sandbox \"86074dac164d545a6bdfaaede8e9762f2bdc237875019f4f8b3c5b856b84c244\" successfully"
Jan 13 20:32:55.950306 containerd[1468]: time="2025-01-13T20:32:55.950225302Z" level=info msg="StopPodSandbox for \"86074dac164d545a6bdfaaede8e9762f2bdc237875019f4f8b3c5b856b84c244\" returns successfully"
Jan 13 20:32:55.952348 systemd[1]: run-netns-cni\x2d9d042ba2\x2da7d6\x2d17ff\x2d3614\x2d686b7b8f0e74.mount: Deactivated successfully.
Jan 13 20:32:55.953699 containerd[1468]: time="2025-01-13T20:32:55.953452469Z" level=info msg="StopPodSandbox for \"c32a74c6158719d3d3dd47a43aa691fc67ffd8fadea51c76e1cabe46239ee209\""
Jan 13 20:32:55.953699 containerd[1468]: time="2025-01-13T20:32:55.953544351Z" level=info msg="TearDown network for sandbox \"c32a74c6158719d3d3dd47a43aa691fc67ffd8fadea51c76e1cabe46239ee209\" successfully"
Jan 13 20:32:55.953699 containerd[1468]: time="2025-01-13T20:32:55.953555792Z" level=info msg="StopPodSandbox for \"c32a74c6158719d3d3dd47a43aa691fc67ffd8fadea51c76e1cabe46239ee209\" returns successfully"
Jan 13 20:32:55.953808 containerd[1468]: time="2025-01-13T20:32:55.953782198Z" level=info msg="StopPodSandbox for \"af8f3d97f148ecdc6de1f17773435b7a3768a0af4d1c424f916403d78a577bb9\""
Jan 13 20:32:55.953871 containerd[1468]: time="2025-01-13T20:32:55.953848000Z" level=info msg="TearDown network for sandbox \"af8f3d97f148ecdc6de1f17773435b7a3768a0af4d1c424f916403d78a577bb9\" successfully"
Jan 13 20:32:55.953871 containerd[1468]: time="2025-01-13T20:32:55.953862840Z" level=info msg="StopPodSandbox for \"af8f3d97f148ecdc6de1f17773435b7a3768a0af4d1c424f916403d78a577bb9\" returns successfully"
Jan 13 20:32:55.956038 containerd[1468]: time="2025-01-13T20:32:55.955987097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b5874dc7-pn7tk,Uid:020d487a-592d-466b-b805-5233e1a92845,Namespace:calico-apiserver,Attempt:3,}"
Jan 13 20:32:55.956612 kubelet[2539]: I0113 20:32:55.956585    2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e045b20982eb4edb8bcbc62d8bb5f2e18c80d6964a710161da81c59cd642336"
Jan 13 20:32:55.957448 containerd[1468]: time="2025-01-13T20:32:55.957412416Z" level=info msg="StopPodSandbox for \"1e045b20982eb4edb8bcbc62d8bb5f2e18c80d6964a710161da81c59cd642336\""
Jan 13 20:32:55.957736 containerd[1468]: time="2025-01-13T20:32:55.957606461Z" level=info msg="Ensure that sandbox 1e045b20982eb4edb8bcbc62d8bb5f2e18c80d6964a710161da81c59cd642336 in task-service has been cleanup successfully"
Jan 13 20:32:55.958047 containerd[1468]: time="2025-01-13T20:32:55.957836027Z" level=info msg="TearDown network for sandbox \"1e045b20982eb4edb8bcbc62d8bb5f2e18c80d6964a710161da81c59cd642336\" successfully"
Jan 13 20:32:55.958047 containerd[1468]: time="2025-01-13T20:32:55.957852788Z" level=info msg="StopPodSandbox for \"1e045b20982eb4edb8bcbc62d8bb5f2e18c80d6964a710161da81c59cd642336\" returns successfully"
Jan 13 20:32:55.959292 containerd[1468]: time="2025-01-13T20:32:55.958611448Z" level=info msg="StopPodSandbox for \"690bc1b2626d272c4e59557880da2cfe9cd245155628eec4d3173c850c242820\""
Jan 13 20:32:55.959292 containerd[1468]: time="2025-01-13T20:32:55.958686730Z" level=info msg="TearDown network for sandbox \"690bc1b2626d272c4e59557880da2cfe9cd245155628eec4d3173c850c242820\" successfully"
Jan 13 20:32:55.959292 containerd[1468]: time="2025-01-13T20:32:55.958696850Z" level=info msg="StopPodSandbox for \"690bc1b2626d272c4e59557880da2cfe9cd245155628eec4d3173c850c242820\" returns successfully"
Jan 13 20:32:55.960948 containerd[1468]: time="2025-01-13T20:32:55.959669397Z" level=info msg="StopPodSandbox for \"2171bb1e1dd6639cc619c888993f6c37ba501d56796a45fb4a35b31eef705b9c\""
Jan 13 20:32:55.960337 systemd[1]: run-netns-cni\x2d8a550d47\x2d3607\x2d5327\x2d7100\x2d8b4eca0acf1c.mount: Deactivated successfully.
Jan 13 20:32:55.961087 containerd[1468]: time="2025-01-13T20:32:55.961062154Z" level=info msg="TearDown network for sandbox \"2171bb1e1dd6639cc619c888993f6c37ba501d56796a45fb4a35b31eef705b9c\" successfully"
Jan 13 20:32:55.961087 containerd[1468]: time="2025-01-13T20:32:55.961080595Z" level=info msg="StopPodSandbox for \"2171bb1e1dd6639cc619c888993f6c37ba501d56796a45fb4a35b31eef705b9c\" returns successfully"
Jan 13 20:32:55.961466 kubelet[2539]: I0113 20:32:55.961444    2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f33082086425a6bd03f3c4d11ee2a9a91e37080b1b783cabdc6ac289975a899"
Jan 13 20:32:55.961519 containerd[1468]: time="2025-01-13T20:32:55.961464885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b5874dc7-m22rh,Uid:5c636eda-04d8-4d14-8cff-128a25fb05c4,Namespace:calico-apiserver,Attempt:3,}"
Jan 13 20:32:55.962237 containerd[1468]: time="2025-01-13T20:32:55.962210745Z" level=info msg="StopPodSandbox for \"4f33082086425a6bd03f3c4d11ee2a9a91e37080b1b783cabdc6ac289975a899\""
Jan 13 20:32:55.962807 containerd[1468]: time="2025-01-13T20:32:55.962777400Z" level=info msg="Ensure that sandbox 4f33082086425a6bd03f3c4d11ee2a9a91e37080b1b783cabdc6ac289975a899 in task-service has been cleanup successfully"
Jan 13 20:32:55.963380 kubelet[2539]: I0113 20:32:55.963259    2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b996ea66c3d596bd6c32c7a8855b207d5452ab0171ba32622b60b63f735bcdd0"
Jan 13 20:32:55.965117 systemd[1]: run-netns-cni\x2d4c7f22b2\x2d3847\x2d7dc5\x2dd330\x2d4797ae2bc0ed.mount: Deactivated successfully.
Jan 13 20:32:56.110979 kubelet[2539]: I0113 20:32:56.107455    2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c60e04b971ad8b6ab716024fbbd0951648e672e34c310ac5f4b66861370c860b"
Jan 13 20:32:56.108586 systemd[1]: run-netns-cni\x2de2e2ee38\x2dd5da\x2d6153\x2d4b19\x2d95d97af897ff.mount: Deactivated successfully.
Jan 13 20:32:56.111466 containerd[1468]: time="2025-01-13T20:32:55.963559221Z" level=info msg="TearDown network for sandbox \"4f33082086425a6bd03f3c4d11ee2a9a91e37080b1b783cabdc6ac289975a899\" successfully"
Jan 13 20:32:56.111466 containerd[1468]: time="2025-01-13T20:32:56.104769654Z" level=info msg="StopPodSandbox for \"4f33082086425a6bd03f3c4d11ee2a9a91e37080b1b783cabdc6ac289975a899\" returns successfully"
Jan 13 20:32:56.111466 containerd[1468]: time="2025-01-13T20:32:55.985138323Z" level=info msg="StopPodSandbox for \"b996ea66c3d596bd6c32c7a8855b207d5452ab0171ba32622b60b63f735bcdd0\""
Jan 13 20:32:56.111466 containerd[1468]: time="2025-01-13T20:32:56.105090543Z" level=info msg="Ensure that sandbox b996ea66c3d596bd6c32c7a8855b207d5452ab0171ba32622b60b63f735bcdd0 in task-service has been cleanup successfully"
Jan 13 20:32:56.111466 containerd[1468]: time="2025-01-13T20:32:56.107070234Z" level=info msg="StopPodSandbox for \"9c46d1e1f1119a188ba9bca6d2f18e6cb004fee7d9a26a534332a533b904c01f\""
Jan 13 20:32:56.111466 containerd[1468]: time="2025-01-13T20:32:56.107169237Z" level=info msg="TearDown network for sandbox \"9c46d1e1f1119a188ba9bca6d2f18e6cb004fee7d9a26a534332a533b904c01f\" successfully"
Jan 13 20:32:56.111466 containerd[1468]: time="2025-01-13T20:32:56.107179397Z" level=info msg="StopPodSandbox for \"9c46d1e1f1119a188ba9bca6d2f18e6cb004fee7d9a26a534332a533b904c01f\" returns successfully"
Jan 13 20:32:56.111466 containerd[1468]: time="2025-01-13T20:32:56.108445670Z" level=info msg="StopPodSandbox for \"c60e04b971ad8b6ab716024fbbd0951648e672e34c310ac5f4b66861370c860b\""
Jan 13 20:32:56.111466 containerd[1468]: time="2025-01-13T20:32:56.108675676Z" level=info msg="Ensure that sandbox c60e04b971ad8b6ab716024fbbd0951648e672e34c310ac5f4b66861370c860b in task-service has been cleanup successfully"
Jan 13 20:32:56.111466 containerd[1468]: time="2025-01-13T20:32:56.109006245Z" level=info msg="StopPodSandbox for \"2a8d29fb1359b46e9799e64a9f306066c6114265817d537da33b51b5aca49153\""
Jan 13 20:32:56.111466 containerd[1468]: time="2025-01-13T20:32:56.109120928Z" level=info msg="TearDown network for sandbox \"2a8d29fb1359b46e9799e64a9f306066c6114265817d537da33b51b5aca49153\" successfully"
Jan 13 20:32:56.111466 containerd[1468]: time="2025-01-13T20:32:56.109141128Z" level=info msg="StopPodSandbox for \"2a8d29fb1359b46e9799e64a9f306066c6114265817d537da33b51b5aca49153\" returns successfully"
Jan 13 20:32:56.115887 containerd[1468]: time="2025-01-13T20:32:56.112847025Z" level=info msg="TearDown network for sandbox \"b996ea66c3d596bd6c32c7a8855b207d5452ab0171ba32622b60b63f735bcdd0\" successfully"
Jan 13 20:32:56.115887 containerd[1468]: time="2025-01-13T20:32:56.112881106Z" level=info msg="StopPodSandbox for \"b996ea66c3d596bd6c32c7a8855b207d5452ab0171ba32622b60b63f735bcdd0\" returns successfully"
Jan 13 20:32:56.115887 containerd[1468]: time="2025-01-13T20:32:56.113099271Z" level=info msg="TearDown network for sandbox \"c60e04b971ad8b6ab716024fbbd0951648e672e34c310ac5f4b66861370c860b\" successfully"
Jan 13 20:32:56.115887 containerd[1468]: time="2025-01-13T20:32:56.113114672Z" level=info msg="StopPodSandbox for \"c60e04b971ad8b6ab716024fbbd0951648e672e34c310ac5f4b66861370c860b\" returns successfully"
Jan 13 20:32:56.115887 containerd[1468]: time="2025-01-13T20:32:56.114258982Z" level=info msg="StopPodSandbox for \"b74bef2fae98d998e15814ed273657c2cbec573cd123ffc705d8c82518fdab91\""
Jan 13 20:32:56.115887 containerd[1468]: time="2025-01-13T20:32:56.114450467Z" level=info msg="TearDown network for sandbox \"b74bef2fae98d998e15814ed273657c2cbec573cd123ffc705d8c82518fdab91\" successfully"
Jan 13 20:32:56.115887 containerd[1468]: time="2025-01-13T20:32:56.114564310Z" level=info msg="StopPodSandbox for \"b74bef2fae98d998e15814ed273657c2cbec573cd123ffc705d8c82518fdab91\" returns successfully"
Jan 13 20:32:56.115887 containerd[1468]: time="2025-01-13T20:32:56.114955400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gfjnm,Uid:956a79b0-4c28-4303-8930-015d57ee6a8d,Namespace:kube-system,Attempt:3,}"
Jan 13 20:32:56.115887 containerd[1468]: time="2025-01-13T20:32:56.115319049Z" level=info msg="StopPodSandbox for \"227014b4c404495432de99cad5b4d7041c29a5b1fa0a53c7fdad08f5eb812dc2\""
Jan 13 20:32:56.115887 containerd[1468]: time="2025-01-13T20:32:56.115421732Z" level=info msg="TearDown network for sandbox \"227014b4c404495432de99cad5b4d7041c29a5b1fa0a53c7fdad08f5eb812dc2\" successfully"
Jan 13 20:32:56.115887 containerd[1468]: time="2025-01-13T20:32:56.115438212Z" level=info msg="StopPodSandbox for \"227014b4c404495432de99cad5b4d7041c29a5b1fa0a53c7fdad08f5eb812dc2\" returns successfully"
Jan 13 20:32:56.116331 kubelet[2539]: E0113 20:32:56.113087    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:56.116331 kubelet[2539]: E0113 20:32:56.115624    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:56.116397 containerd[1468]: time="2025-01-13T20:32:56.115971866Z" level=info msg="StopPodSandbox for \"6977718389673ffa34f94051255432a5ee2d659cfc51e6ec585e2155c236362a\""
Jan 13 20:32:56.116503 containerd[1468]: time="2025-01-13T20:32:56.116477679Z" level=info msg="TearDown network for sandbox \"6977718389673ffa34f94051255432a5ee2d659cfc51e6ec585e2155c236362a\" successfully"
Jan 13 20:32:56.116503 containerd[1468]: time="2025-01-13T20:32:56.116498600Z" level=info msg="StopPodSandbox for \"6977718389673ffa34f94051255432a5ee2d659cfc51e6ec585e2155c236362a\" returns successfully"
Jan 13 20:32:56.116969 containerd[1468]: time="2025-01-13T20:32:56.116939531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2rhs8,Uid:94b6ed52-6d9b-4b44-8661-86ed9c610caa,Namespace:kube-system,Attempt:3,}"
Jan 13 20:32:56.120233 containerd[1468]: time="2025-01-13T20:32:56.120173416Z" level=info msg="StopPodSandbox for \"fb1a52e546c74327b879443ffdc1b6e9a34008fea5cd189b80d312803f2bce7e\""
Jan 13 20:32:56.120440 containerd[1468]: time="2025-01-13T20:32:56.120403862Z" level=info msg="TearDown network for sandbox \"fb1a52e546c74327b879443ffdc1b6e9a34008fea5cd189b80d312803f2bce7e\" successfully"
Jan 13 20:32:56.120440 containerd[1468]: time="2025-01-13T20:32:56.120436143Z" level=info msg="StopPodSandbox for \"fb1a52e546c74327b879443ffdc1b6e9a34008fea5cd189b80d312803f2bce7e\" returns successfully"
Jan 13 20:32:56.120960 containerd[1468]: time="2025-01-13T20:32:56.120901075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8574fbbb74-f4267,Uid:45eae7c1-df7d-4875-9c9e-bd5fcfaa32a1,Namespace:calico-system,Attempt:3,}"
Jan 13 20:32:56.147775 kubelet[2539]: I0113 20:32:56.147734    2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98bf47fa53140be7d7fc7b0a4bfde2dd1ee28edfc19f866eafc53b6a5c680326"
Jan 13 20:32:56.157772 containerd[1468]: time="2025-01-13T20:32:56.157709554Z" level=info msg="StopPodSandbox for \"98bf47fa53140be7d7fc7b0a4bfde2dd1ee28edfc19f866eafc53b6a5c680326\""
Jan 13 20:32:56.158043 containerd[1468]: time="2025-01-13T20:32:56.157897079Z" level=info msg="Ensure that sandbox 98bf47fa53140be7d7fc7b0a4bfde2dd1ee28edfc19f866eafc53b6a5c680326 in task-service has been cleanup successfully"
Jan 13 20:32:56.160915 containerd[1468]: time="2025-01-13T20:32:56.158799182Z" level=info msg="TearDown network for sandbox \"98bf47fa53140be7d7fc7b0a4bfde2dd1ee28edfc19f866eafc53b6a5c680326\" successfully"
Jan 13 20:32:56.160915 containerd[1468]: time="2025-01-13T20:32:56.158827983Z" level=info msg="StopPodSandbox for \"98bf47fa53140be7d7fc7b0a4bfde2dd1ee28edfc19f866eafc53b6a5c680326\" returns successfully"
Jan 13 20:32:56.160915 containerd[1468]: time="2025-01-13T20:32:56.159311235Z" level=info msg="StopPodSandbox for \"2ba4acd24d6eba8273dca26b97b48b62cc98f439908871df9ee12981c7a74f8f\""
Jan 13 20:32:56.160915 containerd[1468]: time="2025-01-13T20:32:56.159389757Z" level=info msg="TearDown network for sandbox \"2ba4acd24d6eba8273dca26b97b48b62cc98f439908871df9ee12981c7a74f8f\" successfully"
Jan 13 20:32:56.160915 containerd[1468]: time="2025-01-13T20:32:56.159400158Z" level=info msg="StopPodSandbox for \"2ba4acd24d6eba8273dca26b97b48b62cc98f439908871df9ee12981c7a74f8f\" returns successfully"
Jan 13 20:32:56.160915 containerd[1468]: time="2025-01-13T20:32:56.159660485Z" level=info msg="StopPodSandbox for \"d8d428451f0a685837713c470e2664b25b6bc2aa86d8ee5c05620efa28ca3f3c\""
Jan 13 20:32:56.160915 containerd[1468]: time="2025-01-13T20:32:56.159725646Z" level=info msg="TearDown network for sandbox \"d8d428451f0a685837713c470e2664b25b6bc2aa86d8ee5c05620efa28ca3f3c\" successfully"
Jan 13 20:32:56.160915 containerd[1468]: time="2025-01-13T20:32:56.159734006Z" level=info msg="StopPodSandbox for \"d8d428451f0a685837713c470e2664b25b6bc2aa86d8ee5c05620efa28ca3f3c\" returns successfully"
Jan 13 20:32:56.160915 containerd[1468]: time="2025-01-13T20:32:56.160696031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mz55k,Uid:1ccd530c-1ce4-41fc-b0fc-1d9142439edd,Namespace:calico-system,Attempt:3,}"
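Annotation: the Attempt field in these PodSandboxMetadata dumps is the CRI retry counter. The pod identity (Name, Uid, Namespace) stays fixed across the loop while Attempt increments 1 → 2 → 3 with each failed sandbox creation, as seen for csi-node-driver-mz55k above. A small struct mirroring the fields visible in the log; this is an illustration, not the generated CRI protobuf type:

```go
// Fields as they appear in the RunPodSandbox log messages above.
package main

import "fmt"

type PodSandboxMetadata struct {
	Name      string
	UID       string
	Namespace string
	Attempt   uint32 // incremented by kubelet on every sandbox-creation retry
}

func main() {
	m := PodSandboxMetadata{
		Name:      "csi-node-driver-mz55k",
		UID:       "1ccd530c-1ce4-41fc-b0fc-1d9142439edd",
		Namespace: "calico-system",
		Attempt:   3,
	}
	fmt.Printf("%+v\n", m)
}
```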
Jan 13 20:32:56.277268 containerd[1468]: time="2025-01-13T20:32:56.277211507Z" level=error msg="Failed to destroy network for sandbox \"0be7f04b7eb038af0328fab487455d648eaaec8e84c76b996cdb64c772e475f6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:56.277609 containerd[1468]: time="2025-01-13T20:32:56.277523555Z" level=error msg="encountered an error cleaning up failed sandbox \"0be7f04b7eb038af0328fab487455d648eaaec8e84c76b996cdb64c772e475f6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:56.277609 containerd[1468]: time="2025-01-13T20:32:56.277584277Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b5874dc7-m22rh,Uid:5c636eda-04d8-4d14-8cff-128a25fb05c4,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"0be7f04b7eb038af0328fab487455d648eaaec8e84c76b996cdb64c772e475f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:56.277949 kubelet[2539]: E0113 20:32:56.277809    2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0be7f04b7eb038af0328fab487455d648eaaec8e84c76b996cdb64c772e475f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:56.277949 kubelet[2539]: E0113 20:32:56.277875    2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0be7f04b7eb038af0328fab487455d648eaaec8e84c76b996cdb64c772e475f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69b5874dc7-m22rh"
Jan 13 20:32:56.277949 kubelet[2539]: E0113 20:32:56.277902    2539 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0be7f04b7eb038af0328fab487455d648eaaec8e84c76b996cdb64c772e475f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69b5874dc7-m22rh"
Jan 13 20:32:56.278075 kubelet[2539]: E0113 20:32:56.277962    2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69b5874dc7-m22rh_calico-apiserver(5c636eda-04d8-4d14-8cff-128a25fb05c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69b5874dc7-m22rh_calico-apiserver(5c636eda-04d8-4d14-8cff-128a25fb05c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0be7f04b7eb038af0328fab487455d648eaaec8e84c76b996cdb64c772e475f6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69b5874dc7-m22rh" podUID="5c636eda-04d8-4d14-8cff-128a25fb05c4"
Jan 13 20:32:56.297626 containerd[1468]: time="2025-01-13T20:32:56.297539037Z" level=error msg="Failed to destroy network for sandbox \"4f60655957aa9a75401ad573647a08c741a7feef9989c6b75fd2228aef3c9806\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:56.298428 containerd[1468]: time="2025-01-13T20:32:56.298200694Z" level=error msg="encountered an error cleaning up failed sandbox \"4f60655957aa9a75401ad573647a08c741a7feef9989c6b75fd2228aef3c9806\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:56.298428 containerd[1468]: time="2025-01-13T20:32:56.298301137Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2rhs8,Uid:94b6ed52-6d9b-4b44-8661-86ed9c610caa,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"4f60655957aa9a75401ad573647a08c741a7feef9989c6b75fd2228aef3c9806\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:56.298428 containerd[1468]: time="2025-01-13T20:32:56.297873406Z" level=error msg="Failed to destroy network for sandbox \"6826b72c36be55efdf7544c31cb92c7bd8d69582dc54319a6565066208e6800d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:56.299277 kubelet[2539]: E0113 20:32:56.299228    2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f60655957aa9a75401ad573647a08c741a7feef9989c6b75fd2228aef3c9806\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:56.299355 kubelet[2539]: E0113 20:32:56.299291    2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f60655957aa9a75401ad573647a08c741a7feef9989c6b75fd2228aef3c9806\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-2rhs8"
Jan 13 20:32:56.299355 kubelet[2539]: E0113 20:32:56.299320    2539 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f60655957aa9a75401ad573647a08c741a7feef9989c6b75fd2228aef3c9806\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-2rhs8"
Jan 13 20:32:56.299412 kubelet[2539]: E0113 20:32:56.299364    2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-2rhs8_kube-system(94b6ed52-6d9b-4b44-8661-86ed9c610caa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-2rhs8_kube-system(94b6ed52-6d9b-4b44-8661-86ed9c610caa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f60655957aa9a75401ad573647a08c741a7feef9989c6b75fd2228aef3c9806\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-2rhs8" podUID="94b6ed52-6d9b-4b44-8661-86ed9c610caa"
Jan 13 20:32:56.299470 containerd[1468]: time="2025-01-13T20:32:56.299391565Z" level=error msg="encountered an error cleaning up failed sandbox \"6826b72c36be55efdf7544c31cb92c7bd8d69582dc54319a6565066208e6800d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:56.299568 containerd[1468]: time="2025-01-13T20:32:56.299530449Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gfjnm,Uid:956a79b0-4c28-4303-8930-015d57ee6a8d,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"6826b72c36be55efdf7544c31cb92c7bd8d69582dc54319a6565066208e6800d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:56.299736 kubelet[2539]: E0113 20:32:56.299696    2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6826b72c36be55efdf7544c31cb92c7bd8d69582dc54319a6565066208e6800d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:56.299799 kubelet[2539]: E0113 20:32:56.299748    2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6826b72c36be55efdf7544c31cb92c7bd8d69582dc54319a6565066208e6800d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-gfjnm"
Jan 13 20:32:56.299799 kubelet[2539]: E0113 20:32:56.299765    2539 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6826b72c36be55efdf7544c31cb92c7bd8d69582dc54319a6565066208e6800d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-gfjnm"
Jan 13 20:32:56.299885 kubelet[2539]: E0113 20:32:56.299802    2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-gfjnm_kube-system(956a79b0-4c28-4303-8930-015d57ee6a8d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-gfjnm_kube-system(956a79b0-4c28-4303-8930-015d57ee6a8d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6826b72c36be55efdf7544c31cb92c7bd8d69582dc54319a6565066208e6800d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-gfjnm" podUID="956a79b0-4c28-4303-8930-015d57ee6a8d"
Jan 13 20:32:56.315586 containerd[1468]: time="2025-01-13T20:32:56.315538386Z" level=error msg="Failed to destroy network for sandbox \"0465478482940ca2e857f1225b7d983c4d14f34d9d233cb9aa6cbdcd6981dedd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:56.317306 containerd[1468]: time="2025-01-13T20:32:56.317173148Z" level=error msg="encountered an error cleaning up failed sandbox \"0465478482940ca2e857f1225b7d983c4d14f34d9d233cb9aa6cbdcd6981dedd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:56.317306 containerd[1468]: time="2025-01-13T20:32:56.317241070Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b5874dc7-pn7tk,Uid:020d487a-592d-466b-b805-5233e1a92845,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"0465478482940ca2e857f1225b7d983c4d14f34d9d233cb9aa6cbdcd6981dedd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:56.317537 kubelet[2539]: E0113 20:32:56.317493    2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0465478482940ca2e857f1225b7d983c4d14f34d9d233cb9aa6cbdcd6981dedd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:56.317592 kubelet[2539]: E0113 20:32:56.317553    2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0465478482940ca2e857f1225b7d983c4d14f34d9d233cb9aa6cbdcd6981dedd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69b5874dc7-pn7tk"
Jan 13 20:32:56.317592 kubelet[2539]: E0113 20:32:56.317573    2539 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0465478482940ca2e857f1225b7d983c4d14f34d9d233cb9aa6cbdcd6981dedd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69b5874dc7-pn7tk"
Jan 13 20:32:56.317645 kubelet[2539]: E0113 20:32:56.317609    2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69b5874dc7-pn7tk_calico-apiserver(020d487a-592d-466b-b805-5233e1a92845)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69b5874dc7-pn7tk_calico-apiserver(020d487a-592d-466b-b805-5233e1a92845)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0465478482940ca2e857f1225b7d983c4d14f34d9d233cb9aa6cbdcd6981dedd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69b5874dc7-pn7tk" podUID="020d487a-592d-466b-b805-5233e1a92845"
Jan 13 20:32:56.329956 containerd[1468]: time="2025-01-13T20:32:56.329887480Z" level=error msg="Failed to destroy network for sandbox \"d14f87f138a958f619d2bd17054987a24270b539da165388919c0107fe6c996e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:56.330354 containerd[1468]: time="2025-01-13T20:32:56.330324131Z" level=error msg="encountered an error cleaning up failed sandbox \"d14f87f138a958f619d2bd17054987a24270b539da165388919c0107fe6c996e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:56.330401 containerd[1468]: time="2025-01-13T20:32:56.330383933Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mz55k,Uid:1ccd530c-1ce4-41fc-b0fc-1d9142439edd,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"d14f87f138a958f619d2bd17054987a24270b539da165388919c0107fe6c996e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:56.330606 kubelet[2539]: E0113 20:32:56.330574    2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d14f87f138a958f619d2bd17054987a24270b539da165388919c0107fe6c996e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:56.330695 kubelet[2539]: E0113 20:32:56.330631    2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d14f87f138a958f619d2bd17054987a24270b539da165388919c0107fe6c996e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mz55k"
Jan 13 20:32:56.330695 kubelet[2539]: E0113 20:32:56.330649    2539 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d14f87f138a958f619d2bd17054987a24270b539da165388919c0107fe6c996e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mz55k"
Jan 13 20:32:56.330747 kubelet[2539]: E0113 20:32:56.330694    2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mz55k_calico-system(1ccd530c-1ce4-41fc-b0fc-1d9142439edd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mz55k_calico-system(1ccd530c-1ce4-41fc-b0fc-1d9142439edd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d14f87f138a958f619d2bd17054987a24270b539da165388919c0107fe6c996e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mz55k" podUID="1ccd530c-1ce4-41fc-b0fc-1d9142439edd"
Jan 13 20:32:56.333853 containerd[1468]: time="2025-01-13T20:32:56.333790141Z" level=error msg="Failed to destroy network for sandbox \"204c61f511ac516da43c7fdbd2cd291490c17a76b4724e0d62f7a1b3d5c62ee3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:56.334827 containerd[1468]: time="2025-01-13T20:32:56.334725046Z" level=error msg="encountered an error cleaning up failed sandbox \"204c61f511ac516da43c7fdbd2cd291490c17a76b4724e0d62f7a1b3d5c62ee3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:56.334827 containerd[1468]: time="2025-01-13T20:32:56.334784207Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8574fbbb74-f4267,Uid:45eae7c1-df7d-4875-9c9e-bd5fcfaa32a1,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"204c61f511ac516da43c7fdbd2cd291490c17a76b4724e0d62f7a1b3d5c62ee3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:56.335356 kubelet[2539]: E0113 20:32:56.335212    2539 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"204c61f511ac516da43c7fdbd2cd291490c17a76b4724e0d62f7a1b3d5c62ee3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 13 20:32:56.335356 kubelet[2539]: E0113 20:32:56.335265    2539 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"204c61f511ac516da43c7fdbd2cd291490c17a76b4724e0d62f7a1b3d5c62ee3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8574fbbb74-f4267"
Jan 13 20:32:56.335356 kubelet[2539]: E0113 20:32:56.335305    2539 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"204c61f511ac516da43c7fdbd2cd291490c17a76b4724e0d62f7a1b3d5c62ee3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-8574fbbb74-f4267"
Jan 13 20:32:56.335474 kubelet[2539]: E0113 20:32:56.335345    2539 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-8574fbbb74-f4267_calico-system(45eae7c1-df7d-4875-9c9e-bd5fcfaa32a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-8574fbbb74-f4267_calico-system(45eae7c1-df7d-4875-9c9e-bd5fcfaa32a1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"204c61f511ac516da43c7fdbd2cd291490c17a76b4724e0d62f7a1b3d5c62ee3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-8574fbbb74-f4267" podUID="45eae7c1-df7d-4875-9c9e-bd5fcfaa32a1"
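
All five RunPodSandbox failures above share one root cause, stated in the error text itself: the Calico CNI plugin stats /var/lib/calico/nodename before servicing any add or delete, and that file does not exist until the calico/node container is running and has mounted /var/lib/calico/. A minimal Go sketch of that guard, using a hypothetical ensureNodename helper rather than the actual Calico source:

    package main

    import (
        "fmt"
        "os"
    )

    const nodenameFile = "/var/lib/calico/nodename"

    // ensureNodename is a hypothetical stand-in for the check the plugin's
    // error message describes: refuse ADD/DEL until calico/node has written
    // this file into the shared /var/lib/calico/ mount.
    func ensureNodename() (string, error) {
        data, err := os.ReadFile(nodenameFile)
        if os.IsNotExist(err) {
            // The exact condition surfaced in the log records above.
            return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
        }
        if err != nil {
            return "", err
        }
        return string(data), nil
    }

    func main() {
        if _, err := ensureNodename(); err != nil {
            fmt.Println(`plugin type="calico" failed (add):`, err)
        }
    }

Once calico-node starts (below) and writes the file, the kubelet's next sync retries each pod with an incremented Attempt counter.
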
Jan 13 20:32:56.415439 containerd[1468]: time="2025-01-13T20:32:56.415312986Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762"
Jan 13 20:32:56.419063 containerd[1468]: time="2025-01-13T20:32:56.419025402Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 3.505736192s"
Jan 13 20:32:56.419063 containerd[1468]: time="2025-01-13T20:32:56.419058523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\""
Jan 13 20:32:56.425151 containerd[1468]: time="2025-01-13T20:32:56.425117761Z" level=info msg="CreateContainer within sandbox \"45577689656203ff1b0ada0234a8832c68b884c24c718fc7a51919e8acd9b3a0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jan 13 20:32:56.426443 containerd[1468]: time="2025-01-13T20:32:56.426404074Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:32:56.427112 containerd[1468]: time="2025-01-13T20:32:56.427081132Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:32:56.427797 containerd[1468]: time="2025-01-13T20:32:56.427762030Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:32:56.436833 containerd[1468]: time="2025-01-13T20:32:56.436792345Z" level=info msg="CreateContainer within sandbox \"45577689656203ff1b0ada0234a8832c68b884c24c718fc7a51919e8acd9b3a0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4a10a44aec708baa8244e04fd8ef179e5a0c65543213c787965b37f05bf0a4d6\""
Jan 13 20:32:56.437665 containerd[1468]: time="2025-01-13T20:32:56.437277678Z" level=info msg="StartContainer for \"4a10a44aec708baa8244e04fd8ef179e5a0c65543213c787965b37f05bf0a4d6\""
Jan 13 20:32:56.497095 systemd[1]: Started cri-containerd-4a10a44aec708baa8244e04fd8ef179e5a0c65543213c787965b37f05bf0a4d6.scope - libcontainer container 4a10a44aec708baa8244e04fd8ef179e5a0c65543213c787965b37f05bf0a4d6.
Jan 13 20:32:56.525616 containerd[1468]: time="2025-01-13T20:32:56.525574138Z" level=info msg="StartContainer for \"4a10a44aec708baa8244e04fd8ef179e5a0c65543213c787965b37f05bf0a4d6\" returns successfully"
Jan 13 20:32:56.683783 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Jan 13 20:32:56.683891 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jan 13 20:32:56.819623 systemd[1]: run-netns-cni\x2da18e7767\x2daece\x2dfb04\x2de53a\x2d27796709feb3.mount: Deactivated successfully.
Jan 13 20:32:56.819734 systemd[1]: run-netns-cni\x2d0d5f095f\x2dc103\x2d3312\x2d312a\x2d96fa38903326.mount: Deactivated successfully.
Jan 13 20:32:56.819780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2963977866.mount: Deactivated successfully.
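
The netns and tmpmount unit names just above show systemd's unit-name escaping: a literal "-" inside the original path is hex-escaped to \x2d, and "/" separators then become "-". A rough sketch of just those two rules (systemd-escape(1) implements the full rule set; this subset reproduces the names in the log):

    package main

    import (
        "fmt"
        "strings"
    )

    // escapeUnit covers only the two escaping rules visible above; it is
    // illustrative, not a full reimplementation of systemd-escape.
    func escapeUnit(path string) string {
        s := strings.TrimPrefix(path, "/")
        s = strings.ReplaceAll(s, "-", `\x2d`) // literal dashes first
        return strings.ReplaceAll(s, "/", "-") // then path separators
    }

    func main() {
        fmt.Println(escapeUnit("/run/netns/cni-a18e7767-aece-fb04-e53a-27796709feb3") + ".mount")
        // run-netns-cni\x2da18e7767\x2daece\x2dfb04\x2de53a\x2d27796709feb3.mount
    }
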
Jan 13 20:32:57.151262 kubelet[2539]: I0113 20:32:57.151156    2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f60655957aa9a75401ad573647a08c741a7feef9989c6b75fd2228aef3c9806"
Jan 13 20:32:57.152053 containerd[1468]: time="2025-01-13T20:32:57.152018212Z" level=info msg="StopPodSandbox for \"4f60655957aa9a75401ad573647a08c741a7feef9989c6b75fd2228aef3c9806\""
Jan 13 20:32:57.152294 containerd[1468]: time="2025-01-13T20:32:57.152232658Z" level=info msg="Ensure that sandbox 4f60655957aa9a75401ad573647a08c741a7feef9989c6b75fd2228aef3c9806 in task-service has been cleanup successfully"
Jan 13 20:32:57.154821 systemd[1]: run-netns-cni\x2dbd336edf\x2d1859\x2df2d9\x2d234c\x2d680dd4353d3e.mount: Deactivated successfully.
Jan 13 20:32:57.155580 containerd[1468]: time="2025-01-13T20:32:57.155495820Z" level=info msg="TearDown network for sandbox \"4f60655957aa9a75401ad573647a08c741a7feef9989c6b75fd2228aef3c9806\" successfully"
Jan 13 20:32:57.155580 containerd[1468]: time="2025-01-13T20:32:57.155522821Z" level=info msg="StopPodSandbox for \"4f60655957aa9a75401ad573647a08c741a7feef9989c6b75fd2228aef3c9806\" returns successfully"
Jan 13 20:32:57.160625 containerd[1468]: time="2025-01-13T20:32:57.160567508Z" level=info msg="StopPodSandbox for \"b996ea66c3d596bd6c32c7a8855b207d5452ab0171ba32622b60b63f735bcdd0\""
Jan 13 20:32:57.160748 containerd[1468]: time="2025-01-13T20:32:57.160663310Z" level=info msg="TearDown network for sandbox \"b996ea66c3d596bd6c32c7a8855b207d5452ab0171ba32622b60b63f735bcdd0\" successfully"
Jan 13 20:32:57.160748 containerd[1468]: time="2025-01-13T20:32:57.160674231Z" level=info msg="StopPodSandbox for \"b996ea66c3d596bd6c32c7a8855b207d5452ab0171ba32622b60b63f735bcdd0\" returns successfully"
Jan 13 20:32:57.161130 containerd[1468]: time="2025-01-13T20:32:57.161101161Z" level=info msg="StopPodSandbox for \"b74bef2fae98d998e15814ed273657c2cbec573cd123ffc705d8c82518fdab91\""
Jan 13 20:32:57.161202 containerd[1468]: time="2025-01-13T20:32:57.161185604Z" level=info msg="TearDown network for sandbox \"b74bef2fae98d998e15814ed273657c2cbec573cd123ffc705d8c82518fdab91\" successfully"
Jan 13 20:32:57.161202 containerd[1468]: time="2025-01-13T20:32:57.161199484Z" level=info msg="StopPodSandbox for \"b74bef2fae98d998e15814ed273657c2cbec573cd123ffc705d8c82518fdab91\" returns successfully"
Jan 13 20:32:57.161482 kubelet[2539]: I0113 20:32:57.161461    2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="204c61f511ac516da43c7fdbd2cd291490c17a76b4724e0d62f7a1b3d5c62ee3"
Jan 13 20:32:57.162089 kubelet[2539]: E0113 20:32:57.161857    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:57.162142 containerd[1468]: time="2025-01-13T20:32:57.161604854Z" level=info msg="StopPodSandbox for \"227014b4c404495432de99cad5b4d7041c29a5b1fa0a53c7fdad08f5eb812dc2\""
Jan 13 20:32:57.162142 containerd[1468]: time="2025-01-13T20:32:57.161682816Z" level=info msg="TearDown network for sandbox \"227014b4c404495432de99cad5b4d7041c29a5b1fa0a53c7fdad08f5eb812dc2\" successfully"
Jan 13 20:32:57.162142 containerd[1468]: time="2025-01-13T20:32:57.161692016Z" level=info msg="StopPodSandbox for \"227014b4c404495432de99cad5b4d7041c29a5b1fa0a53c7fdad08f5eb812dc2\" returns successfully"
Jan 13 20:32:57.162142 containerd[1468]: time="2025-01-13T20:32:57.161994464Z" level=info msg="StopPodSandbox for \"204c61f511ac516da43c7fdbd2cd291490c17a76b4724e0d62f7a1b3d5c62ee3\""
Jan 13 20:32:57.162142 containerd[1468]: time="2025-01-13T20:32:57.162129827Z" level=info msg="Ensure that sandbox 204c61f511ac516da43c7fdbd2cd291490c17a76b4724e0d62f7a1b3d5c62ee3 in task-service has been cleanup successfully"
Jan 13 20:32:57.162370 containerd[1468]: time="2025-01-13T20:32:57.162143388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2rhs8,Uid:94b6ed52-6d9b-4b44-8661-86ed9c610caa,Namespace:kube-system,Attempt:4,}"
Jan 13 20:32:57.162370 containerd[1468]: time="2025-01-13T20:32:57.162315312Z" level=info msg="TearDown network for sandbox \"204c61f511ac516da43c7fdbd2cd291490c17a76b4724e0d62f7a1b3d5c62ee3\" successfully"
Jan 13 20:32:57.162370 containerd[1468]: time="2025-01-13T20:32:57.162329072Z" level=info msg="StopPodSandbox for \"204c61f511ac516da43c7fdbd2cd291490c17a76b4724e0d62f7a1b3d5c62ee3\" returns successfully"
Jan 13 20:32:57.163060 containerd[1468]: time="2025-01-13T20:32:57.163034650Z" level=info msg="StopPodSandbox for \"c60e04b971ad8b6ab716024fbbd0951648e672e34c310ac5f4b66861370c860b\""
Jan 13 20:32:57.163177 containerd[1468]: time="2025-01-13T20:32:57.163114972Z" level=info msg="TearDown network for sandbox \"c60e04b971ad8b6ab716024fbbd0951648e672e34c310ac5f4b66861370c860b\" successfully"
Jan 13 20:32:57.163177 containerd[1468]: time="2025-01-13T20:32:57.163125052Z" level=info msg="StopPodSandbox for \"c60e04b971ad8b6ab716024fbbd0951648e672e34c310ac5f4b66861370c860b\" returns successfully"
Jan 13 20:32:57.165881 containerd[1468]: time="2025-01-13T20:32:57.165775519Z" level=info msg="StopPodSandbox for \"6977718389673ffa34f94051255432a5ee2d659cfc51e6ec585e2155c236362a\""
Jan 13 20:32:57.165881 containerd[1468]: time="2025-01-13T20:32:57.165869362Z" level=info msg="TearDown network for sandbox \"6977718389673ffa34f94051255432a5ee2d659cfc51e6ec585e2155c236362a\" successfully"
Jan 13 20:32:57.165881 containerd[1468]: time="2025-01-13T20:32:57.165879282Z" level=info msg="StopPodSandbox for \"6977718389673ffa34f94051255432a5ee2d659cfc51e6ec585e2155c236362a\" returns successfully"
Jan 13 20:32:57.166374 containerd[1468]: time="2025-01-13T20:32:57.166228291Z" level=info msg="StopPodSandbox for \"fb1a52e546c74327b879443ffdc1b6e9a34008fea5cd189b80d312803f2bce7e\""
Jan 13 20:32:57.166374 containerd[1468]: time="2025-01-13T20:32:57.166315493Z" level=info msg="TearDown network for sandbox \"fb1a52e546c74327b879443ffdc1b6e9a34008fea5cd189b80d312803f2bce7e\" successfully"
Jan 13 20:32:57.166374 containerd[1468]: time="2025-01-13T20:32:57.166326053Z" level=info msg="StopPodSandbox for \"fb1a52e546c74327b879443ffdc1b6e9a34008fea5cd189b80d312803f2bce7e\" returns successfully"
Jan 13 20:32:57.166467 systemd[1]: run-netns-cni\x2d6b2cb809\x2dd3d9\x2d85e0\x2db8a6\x2deaee3a995649.mount: Deactivated successfully.
Jan 13 20:32:57.167252 kubelet[2539]: E0113 20:32:57.166459    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:57.167350 containerd[1468]: time="2025-01-13T20:32:57.167318958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8574fbbb74-f4267,Uid:45eae7c1-df7d-4875-9c9e-bd5fcfaa32a1,Namespace:calico-system,Attempt:4,}"
Jan 13 20:32:57.172399 kubelet[2539]: I0113 20:32:57.171404    2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d14f87f138a958f619d2bd17054987a24270b539da165388919c0107fe6c996e"
Jan 13 20:32:57.172506 containerd[1468]: time="2025-01-13T20:32:57.171852752Z" level=info msg="StopPodSandbox for \"d14f87f138a958f619d2bd17054987a24270b539da165388919c0107fe6c996e\""
Jan 13 20:32:57.172506 containerd[1468]: time="2025-01-13T20:32:57.172046837Z" level=info msg="Ensure that sandbox d14f87f138a958f619d2bd17054987a24270b539da165388919c0107fe6c996e in task-service has been cleanup successfully"
Jan 13 20:32:57.172506 containerd[1468]: time="2025-01-13T20:32:57.172293724Z" level=info msg="TearDown network for sandbox \"d14f87f138a958f619d2bd17054987a24270b539da165388919c0107fe6c996e\" successfully"
Jan 13 20:32:57.172506 containerd[1468]: time="2025-01-13T20:32:57.172312604Z" level=info msg="StopPodSandbox for \"d14f87f138a958f619d2bd17054987a24270b539da165388919c0107fe6c996e\" returns successfully"
Jan 13 20:32:57.175619 containerd[1468]: time="2025-01-13T20:32:57.174356856Z" level=info msg="StopPodSandbox for \"98bf47fa53140be7d7fc7b0a4bfde2dd1ee28edfc19f866eafc53b6a5c680326\""
Jan 13 20:32:57.175619 containerd[1468]: time="2025-01-13T20:32:57.174438498Z" level=info msg="TearDown network for sandbox \"98bf47fa53140be7d7fc7b0a4bfde2dd1ee28edfc19f866eafc53b6a5c680326\" successfully"
Jan 13 20:32:57.175619 containerd[1468]: time="2025-01-13T20:32:57.174448338Z" level=info msg="StopPodSandbox for \"98bf47fa53140be7d7fc7b0a4bfde2dd1ee28edfc19f866eafc53b6a5c680326\" returns successfully"
Jan 13 20:32:57.175619 containerd[1468]: time="2025-01-13T20:32:57.175495484Z" level=info msg="StopPodSandbox for \"2ba4acd24d6eba8273dca26b97b48b62cc98f439908871df9ee12981c7a74f8f\""
Jan 13 20:32:57.175619 containerd[1468]: time="2025-01-13T20:32:57.175566326Z" level=info msg="TearDown network for sandbox \"2ba4acd24d6eba8273dca26b97b48b62cc98f439908871df9ee12981c7a74f8f\" successfully"
Jan 13 20:32:57.175619 containerd[1468]: time="2025-01-13T20:32:57.175576326Z" level=info msg="StopPodSandbox for \"2ba4acd24d6eba8273dca26b97b48b62cc98f439908871df9ee12981c7a74f8f\" returns successfully"
Jan 13 20:32:57.174814 systemd[1]: run-netns-cni\x2df13f0698\x2d8c48\x2d1773\x2d4fd5\x2dcb0d52c81596.mount: Deactivated successfully.
Jan 13 20:32:57.176737 containerd[1468]: time="2025-01-13T20:32:57.176567871Z" level=info msg="StopPodSandbox for \"d8d428451f0a685837713c470e2664b25b6bc2aa86d8ee5c05620efa28ca3f3c\""
Jan 13 20:32:57.177164 kubelet[2539]: I0113 20:32:57.177127    2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0465478482940ca2e857f1225b7d983c4d14f34d9d233cb9aa6cbdcd6981dedd"
Jan 13 20:32:57.177324 containerd[1468]: time="2025-01-13T20:32:57.177292690Z" level=info msg="TearDown network for sandbox \"d8d428451f0a685837713c470e2664b25b6bc2aa86d8ee5c05620efa28ca3f3c\" successfully"
Jan 13 20:32:57.177324 containerd[1468]: time="2025-01-13T20:32:57.177315330Z" level=info msg="StopPodSandbox for \"d8d428451f0a685837713c470e2664b25b6bc2aa86d8ee5c05620efa28ca3f3c\" returns successfully"
Jan 13 20:32:57.178596 containerd[1468]: time="2025-01-13T20:32:57.178285675Z" level=info msg="StopPodSandbox for \"0465478482940ca2e857f1225b7d983c4d14f34d9d233cb9aa6cbdcd6981dedd\""
Jan 13 20:32:57.178668 containerd[1468]: time="2025-01-13T20:32:57.178622203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mz55k,Uid:1ccd530c-1ce4-41fc-b0fc-1d9142439edd,Namespace:calico-system,Attempt:4,}"
Jan 13 20:32:57.179184 containerd[1468]: time="2025-01-13T20:32:57.178716245Z" level=info msg="Ensure that sandbox 0465478482940ca2e857f1225b7d983c4d14f34d9d233cb9aa6cbdcd6981dedd in task-service has been cleanup successfully"
Jan 13 20:32:57.179888 containerd[1468]: time="2025-01-13T20:32:57.179857954Z" level=info msg="TearDown network for sandbox \"0465478482940ca2e857f1225b7d983c4d14f34d9d233cb9aa6cbdcd6981dedd\" successfully"
Jan 13 20:32:57.179888 containerd[1468]: time="2025-01-13T20:32:57.179883155Z" level=info msg="StopPodSandbox for \"0465478482940ca2e857f1225b7d983c4d14f34d9d233cb9aa6cbdcd6981dedd\" returns successfully"
Jan 13 20:32:57.182095 systemd[1]: run-netns-cni\x2d0b9d2c97\x2d73e7\x2d4021\x2d599b\x2d1a98570ccee1.mount: Deactivated successfully.
Jan 13 20:32:57.195003 containerd[1468]: time="2025-01-13T20:32:57.194891573Z" level=info msg="StopPodSandbox for \"86074dac164d545a6bdfaaede8e9762f2bdc237875019f4f8b3c5b856b84c244\""
Jan 13 20:32:57.195132 containerd[1468]: time="2025-01-13T20:32:57.195035417Z" level=info msg="TearDown network for sandbox \"86074dac164d545a6bdfaaede8e9762f2bdc237875019f4f8b3c5b856b84c244\" successfully"
Jan 13 20:32:57.195132 containerd[1468]: time="2025-01-13T20:32:57.195048137Z" level=info msg="StopPodSandbox for \"86074dac164d545a6bdfaaede8e9762f2bdc237875019f4f8b3c5b856b84c244\" returns successfully"
Jan 13 20:32:57.195191 kubelet[2539]: I0113 20:32:57.195134    2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mtrcp" podStartSLOduration=1.981270706 podStartE2EDuration="12.195115459s" podCreationTimestamp="2025-01-13 20:32:45 +0000 UTC" firstStartedPulling="2025-01-13 20:32:46.206032591 +0000 UTC m=+14.500897244" lastFinishedPulling="2025-01-13 20:32:56.419877344 +0000 UTC m=+24.714741997" observedRunningTime="2025-01-13 20:32:57.194641087 +0000 UTC m=+25.489505740" watchObservedRunningTime="2025-01-13 20:32:57.195115459 +0000 UTC m=+25.489980112"
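
The startup-latency fields in the record above are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that span minus the image-pull window (lastFinishedPulling minus firstStartedPulling), i.e. startup latency excluding pull time. A sketch reproducing the arithmetic from the logged timestamps; the formula is inferred from these numbers, not quoted from the kubelet source:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        parse := func(s string) time.Time {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            return t
        }
        created := parse("2025-01-13 20:32:45 +0000 UTC")             // podCreationTimestamp
        firstPull := parse("2025-01-13 20:32:46.206032591 +0000 UTC") // firstStartedPulling
        lastPull := parse("2025-01-13 20:32:56.419877344 +0000 UTC")  // lastFinishedPulling
        running := parse("2025-01-13 20:32:57.195115459 +0000 UTC")   // watchObservedRunningTime

        e2e := running.Sub(created)          // 12.195115459s = podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // minus 10.213844753s pull time = 1.981270706s
        fmt.Println(e2e, slo)
    }
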
Jan 13 20:32:57.199243 containerd[1468]: time="2025-01-13T20:32:57.198344900Z" level=info msg="StopPodSandbox for \"c32a74c6158719d3d3dd47a43aa691fc67ffd8fadea51c76e1cabe46239ee209\""
Jan 13 20:32:57.199243 containerd[1468]: time="2025-01-13T20:32:57.198440503Z" level=info msg="TearDown network for sandbox \"c32a74c6158719d3d3dd47a43aa691fc67ffd8fadea51c76e1cabe46239ee209\" successfully"
Jan 13 20:32:57.199243 containerd[1468]: time="2025-01-13T20:32:57.198451543Z" level=info msg="StopPodSandbox for \"c32a74c6158719d3d3dd47a43aa691fc67ffd8fadea51c76e1cabe46239ee209\" returns successfully"
Jan 13 20:32:57.199528 containerd[1468]: time="2025-01-13T20:32:57.199478049Z" level=info msg="StopPodSandbox for \"af8f3d97f148ecdc6de1f17773435b7a3768a0af4d1c424f916403d78a577bb9\""
Jan 13 20:32:57.199553 kubelet[2539]: I0113 20:32:57.199533    2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0be7f04b7eb038af0328fab487455d648eaaec8e84c76b996cdb64c772e475f6"
Jan 13 20:32:57.200324 containerd[1468]: time="2025-01-13T20:32:57.199603732Z" level=info msg="TearDown network for sandbox \"af8f3d97f148ecdc6de1f17773435b7a3768a0af4d1c424f916403d78a577bb9\" successfully"
Jan 13 20:32:57.200324 containerd[1468]: time="2025-01-13T20:32:57.199620572Z" level=info msg="StopPodSandbox for \"af8f3d97f148ecdc6de1f17773435b7a3768a0af4d1c424f916403d78a577bb9\" returns successfully"
Jan 13 20:32:57.200324 containerd[1468]: time="2025-01-13T20:32:57.200065224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b5874dc7-pn7tk,Uid:020d487a-592d-466b-b805-5233e1a92845,Namespace:calico-apiserver,Attempt:4,}"
Jan 13 20:32:57.201643 containerd[1468]: time="2025-01-13T20:32:57.201615663Z" level=info msg="StopPodSandbox for \"0be7f04b7eb038af0328fab487455d648eaaec8e84c76b996cdb64c772e475f6\""
Jan 13 20:32:57.201772 containerd[1468]: time="2025-01-13T20:32:57.201748826Z" level=info msg="Ensure that sandbox 0be7f04b7eb038af0328fab487455d648eaaec8e84c76b996cdb64c772e475f6 in task-service has been cleanup successfully"
Jan 13 20:32:57.202859 containerd[1468]: time="2025-01-13T20:32:57.202820653Z" level=info msg="TearDown network for sandbox \"0be7f04b7eb038af0328fab487455d648eaaec8e84c76b996cdb64c772e475f6\" successfully"
Jan 13 20:32:57.202859 containerd[1468]: time="2025-01-13T20:32:57.202843334Z" level=info msg="StopPodSandbox for \"0be7f04b7eb038af0328fab487455d648eaaec8e84c76b996cdb64c772e475f6\" returns successfully"
Jan 13 20:32:57.203469 containerd[1468]: time="2025-01-13T20:32:57.203136341Z" level=info msg="StopPodSandbox for \"1e045b20982eb4edb8bcbc62d8bb5f2e18c80d6964a710161da81c59cd642336\""
Jan 13 20:32:57.204449 kubelet[2539]: I0113 20:32:57.204417    2539 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6826b72c36be55efdf7544c31cb92c7bd8d69582dc54319a6565066208e6800d"
Jan 13 20:32:57.204891 containerd[1468]: time="2025-01-13T20:32:57.204868305Z" level=info msg="StopPodSandbox for \"6826b72c36be55efdf7544c31cb92c7bd8d69582dc54319a6565066208e6800d\""
Jan 13 20:32:57.206494 containerd[1468]: time="2025-01-13T20:32:57.206451345Z" level=info msg="Ensure that sandbox 6826b72c36be55efdf7544c31cb92c7bd8d69582dc54319a6565066208e6800d in task-service has been cleanup successfully"
Jan 13 20:32:57.208048 containerd[1468]: time="2025-01-13T20:32:57.207147322Z" level=info msg="TearDown network for sandbox \"6826b72c36be55efdf7544c31cb92c7bd8d69582dc54319a6565066208e6800d\" successfully"
Jan 13 20:32:57.208344 containerd[1468]: time="2025-01-13T20:32:57.208286551Z" level=info msg="StopPodSandbox for \"6826b72c36be55efdf7544c31cb92c7bd8d69582dc54319a6565066208e6800d\" returns successfully"
Jan 13 20:32:57.209019 containerd[1468]: time="2025-01-13T20:32:57.208685241Z" level=info msg="TearDown network for sandbox \"1e045b20982eb4edb8bcbc62d8bb5f2e18c80d6964a710161da81c59cd642336\" successfully"
Jan 13 20:32:57.209019 containerd[1468]: time="2025-01-13T20:32:57.208710882Z" level=info msg="StopPodSandbox for \"1e045b20982eb4edb8bcbc62d8bb5f2e18c80d6964a710161da81c59cd642336\" returns successfully"
Jan 13 20:32:57.209019 containerd[1468]: time="2025-01-13T20:32:57.208832245Z" level=info msg="StopPodSandbox for \"4f33082086425a6bd03f3c4d11ee2a9a91e37080b1b783cabdc6ac289975a899\""
Jan 13 20:32:57.209019 containerd[1468]: time="2025-01-13T20:32:57.208893166Z" level=info msg="TearDown network for sandbox \"4f33082086425a6bd03f3c4d11ee2a9a91e37080b1b783cabdc6ac289975a899\" successfully"
Jan 13 20:32:57.209019 containerd[1468]: time="2025-01-13T20:32:57.208902046Z" level=info msg="StopPodSandbox for \"4f33082086425a6bd03f3c4d11ee2a9a91e37080b1b783cabdc6ac289975a899\" returns successfully"
Jan 13 20:32:57.209491 containerd[1468]: time="2025-01-13T20:32:57.209250615Z" level=info msg="StopPodSandbox for \"9c46d1e1f1119a188ba9bca6d2f18e6cb004fee7d9a26a534332a533b904c01f\""
Jan 13 20:32:57.209491 containerd[1468]: time="2025-01-13T20:32:57.209335497Z" level=info msg="TearDown network for sandbox \"9c46d1e1f1119a188ba9bca6d2f18e6cb004fee7d9a26a534332a533b904c01f\" successfully"
Jan 13 20:32:57.209491 containerd[1468]: time="2025-01-13T20:32:57.209346018Z" level=info msg="StopPodSandbox for \"9c46d1e1f1119a188ba9bca6d2f18e6cb004fee7d9a26a534332a533b904c01f\" returns successfully"
Jan 13 20:32:57.209491 containerd[1468]: time="2025-01-13T20:32:57.209251255Z" level=info msg="StopPodSandbox for \"690bc1b2626d272c4e59557880da2cfe9cd245155628eec4d3173c850c242820\""
Jan 13 20:32:57.209491 containerd[1468]: time="2025-01-13T20:32:57.209441860Z" level=info msg="TearDown network for sandbox \"690bc1b2626d272c4e59557880da2cfe9cd245155628eec4d3173c850c242820\" successfully"
Jan 13 20:32:57.209491 containerd[1468]: time="2025-01-13T20:32:57.209452660Z" level=info msg="StopPodSandbox for \"690bc1b2626d272c4e59557880da2cfe9cd245155628eec4d3173c850c242820\" returns successfully"
Jan 13 20:32:57.210133 containerd[1468]: time="2025-01-13T20:32:57.210102837Z" level=info msg="StopPodSandbox for \"2a8d29fb1359b46e9799e64a9f306066c6114265817d537da33b51b5aca49153\""
Jan 13 20:32:57.210207 containerd[1468]: time="2025-01-13T20:32:57.210186359Z" level=info msg="TearDown network for sandbox \"2a8d29fb1359b46e9799e64a9f306066c6114265817d537da33b51b5aca49153\" successfully"
Jan 13 20:32:57.210207 containerd[1468]: time="2025-01-13T20:32:57.210196959Z" level=info msg="StopPodSandbox for \"2a8d29fb1359b46e9799e64a9f306066c6114265817d537da33b51b5aca49153\" returns successfully"
Jan 13 20:32:57.210353 containerd[1468]: time="2025-01-13T20:32:57.210331722Z" level=info msg="StopPodSandbox for \"2171bb1e1dd6639cc619c888993f6c37ba501d56796a45fb4a35b31eef705b9c\""
Jan 13 20:32:57.210512 kubelet[2539]: E0113 20:32:57.210429    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:57.210622 containerd[1468]: time="2025-01-13T20:32:57.210603249Z" level=info msg="TearDown network for sandbox \"2171bb1e1dd6639cc619c888993f6c37ba501d56796a45fb4a35b31eef705b9c\" successfully"
Jan 13 20:32:57.210687 containerd[1468]: time="2025-01-13T20:32:57.210666331Z" level=info msg="StopPodSandbox for \"2171bb1e1dd6639cc619c888993f6c37ba501d56796a45fb4a35b31eef705b9c\" returns successfully"
Jan 13 20:32:57.211047 containerd[1468]: time="2025-01-13T20:32:57.211015940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gfjnm,Uid:956a79b0-4c28-4303-8930-015d57ee6a8d,Namespace:kube-system,Attempt:4,}"
Jan 13 20:32:57.212383 containerd[1468]: time="2025-01-13T20:32:57.212326693Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b5874dc7-m22rh,Uid:5c636eda-04d8-4d14-8cff-128a25fb05c4,Namespace:calico-apiserver,Attempt:4,}"
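
The recurring "Nameserver limits exceeded" warnings in this teardown/re-run sequence come from the kubelet capping a pod's resolv.conf at the glibc limit of three nameservers and dropping the rest; only "1.1.1.1 1.0.0.1 8.8.8.8" survive. A minimal sketch of that truncation, assuming a hypothetical fourth resolver for illustration (the constant mirrors kubelet's MaxDNSNameservers, but this is not the kubelet source):

    package main

    import "fmt"

    const maxDNSNameservers = 3 // glibc resolver limit enforced by kubelet

    func capNameservers(ns []string) []string {
        if len(ns) > maxDNSNameservers {
            return ns[:maxDNSNameservers] // extra entries are omitted with a warning
        }
        return ns
    }

    func main() {
        // "9.9.9.9" is a hypothetical fourth entry standing in for whatever
        // the node's resolv.conf actually contained beyond the logged three.
        host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
        fmt.Println(capNameservers(host)) // [1.1.1.1 1.0.0.1 8.8.8.8]
    }
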
Jan 13 20:32:57.655210 systemd-networkd[1393]: cali84104efaa12: Link UP
Jan 13 20:32:57.655405 systemd-networkd[1393]: cali84104efaa12: Gained carrier
Jan 13 20:32:57.668668 containerd[1468]: 2025-01-13 20:32:57.223 [INFO][4225] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 13 20:32:57.668668 containerd[1468]: 2025-01-13 20:32:57.354 [INFO][4225] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--2rhs8-eth0 coredns-6f6b679f8f- kube-system  94b6ed52-6d9b-4b44-8661-86ed9c610caa 723 0 2025-01-13 20:32:38 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s  localhost  coredns-6f6b679f8f-2rhs8 eth0 coredns [] []   [kns.kube-system ksa.kube-system.coredns] cali84104efaa12  [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="aedc24c80c545ababd13286ff7ebb10a6fcb1b7ea3b84b3a8f6ea70c551e72ee" Namespace="kube-system" Pod="coredns-6f6b679f8f-2rhs8" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--2rhs8-"
Jan 13 20:32:57.668668 containerd[1468]: 2025-01-13 20:32:57.354 [INFO][4225] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="aedc24c80c545ababd13286ff7ebb10a6fcb1b7ea3b84b3a8f6ea70c551e72ee" Namespace="kube-system" Pod="coredns-6f6b679f8f-2rhs8" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--2rhs8-eth0"
Jan 13 20:32:57.668668 containerd[1468]: 2025-01-13 20:32:57.581 [INFO][4312] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aedc24c80c545ababd13286ff7ebb10a6fcb1b7ea3b84b3a8f6ea70c551e72ee" HandleID="k8s-pod-network.aedc24c80c545ababd13286ff7ebb10a6fcb1b7ea3b84b3a8f6ea70c551e72ee" Workload="localhost-k8s-coredns--6f6b679f8f--2rhs8-eth0"
Jan 13 20:32:57.668668 containerd[1468]: 2025-01-13 20:32:57.596 [INFO][4312] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aedc24c80c545ababd13286ff7ebb10a6fcb1b7ea3b84b3a8f6ea70c551e72ee" HandleID="k8s-pod-network.aedc24c80c545ababd13286ff7ebb10a6fcb1b7ea3b84b3a8f6ea70c551e72ee" Workload="localhost-k8s-coredns--6f6b679f8f--2rhs8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40006821c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-2rhs8", "timestamp":"2025-01-13 20:32:57.581515039 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 13 20:32:57.668668 containerd[1468]: 2025-01-13 20:32:57.598 [INFO][4312] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 20:32:57.668668 containerd[1468]: 2025-01-13 20:32:57.598 [INFO][4312] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 20:32:57.668668 containerd[1468]: 2025-01-13 20:32:57.598 [INFO][4312] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jan 13 20:32:57.668668 containerd[1468]: 2025-01-13 20:32:57.605 [INFO][4312] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.aedc24c80c545ababd13286ff7ebb10a6fcb1b7ea3b84b3a8f6ea70c551e72ee" host="localhost"
Jan 13 20:32:57.668668 containerd[1468]: 2025-01-13 20:32:57.611 [INFO][4312] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
Jan 13 20:32:57.668668 containerd[1468]: 2025-01-13 20:32:57.617 [INFO][4312] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Jan 13 20:32:57.668668 containerd[1468]: 2025-01-13 20:32:57.618 [INFO][4312] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jan 13 20:32:57.668668 containerd[1468]: 2025-01-13 20:32:57.621 [INFO][4312] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jan 13 20:32:57.668668 containerd[1468]: 2025-01-13 20:32:57.621 [INFO][4312] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.aedc24c80c545ababd13286ff7ebb10a6fcb1b7ea3b84b3a8f6ea70c551e72ee" host="localhost"
Jan 13 20:32:57.668668 containerd[1468]: 2025-01-13 20:32:57.623 [INFO][4312] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.aedc24c80c545ababd13286ff7ebb10a6fcb1b7ea3b84b3a8f6ea70c551e72ee
Jan 13 20:32:57.668668 containerd[1468]: 2025-01-13 20:32:57.626 [INFO][4312] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.aedc24c80c545ababd13286ff7ebb10a6fcb1b7ea3b84b3a8f6ea70c551e72ee" host="localhost"
Jan 13 20:32:57.668668 containerd[1468]: 2025-01-13 20:32:57.631 [INFO][4312] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.aedc24c80c545ababd13286ff7ebb10a6fcb1b7ea3b84b3a8f6ea70c551e72ee" host="localhost"
Jan 13 20:32:57.668668 containerd[1468]: 2025-01-13 20:32:57.631 [INFO][4312] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.aedc24c80c545ababd13286ff7ebb10a6fcb1b7ea3b84b3a8f6ea70c551e72ee" host="localhost"
Jan 13 20:32:57.668668 containerd[1468]: 2025-01-13 20:32:57.631 [INFO][4312] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 20:32:57.668668 containerd[1468]: 2025-01-13 20:32:57.631 [INFO][4312] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="aedc24c80c545ababd13286ff7ebb10a6fcb1b7ea3b84b3a8f6ea70c551e72ee" HandleID="k8s-pod-network.aedc24c80c545ababd13286ff7ebb10a6fcb1b7ea3b84b3a8f6ea70c551e72ee" Workload="localhost-k8s-coredns--6f6b679f8f--2rhs8-eth0"
Jan 13 20:32:57.669228 containerd[1468]: 2025-01-13 20:32:57.637 [INFO][4225] cni-plugin/k8s.go 386: Populated endpoint ContainerID="aedc24c80c545ababd13286ff7ebb10a6fcb1b7ea3b84b3a8f6ea70c551e72ee" Namespace="kube-system" Pod="coredns-6f6b679f8f-2rhs8" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--2rhs8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--2rhs8-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"94b6ed52-6d9b-4b44-8661-86ed9c610caa", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 32, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-2rhs8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84104efaa12", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:32:57.669228 containerd[1468]: 2025-01-13 20:32:57.637 [INFO][4225] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="aedc24c80c545ababd13286ff7ebb10a6fcb1b7ea3b84b3a8f6ea70c551e72ee" Namespace="kube-system" Pod="coredns-6f6b679f8f-2rhs8" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--2rhs8-eth0"
Jan 13 20:32:57.669228 containerd[1468]: 2025-01-13 20:32:57.637 [INFO][4225] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali84104efaa12 ContainerID="aedc24c80c545ababd13286ff7ebb10a6fcb1b7ea3b84b3a8f6ea70c551e72ee" Namespace="kube-system" Pod="coredns-6f6b679f8f-2rhs8" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--2rhs8-eth0"
Jan 13 20:32:57.669228 containerd[1468]: 2025-01-13 20:32:57.654 [INFO][4225] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aedc24c80c545ababd13286ff7ebb10a6fcb1b7ea3b84b3a8f6ea70c551e72ee" Namespace="kube-system" Pod="coredns-6f6b679f8f-2rhs8" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--2rhs8-eth0"
Jan 13 20:32:57.669228 containerd[1468]: 2025-01-13 20:32:57.654 [INFO][4225] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="aedc24c80c545ababd13286ff7ebb10a6fcb1b7ea3b84b3a8f6ea70c551e72ee" Namespace="kube-system" Pod="coredns-6f6b679f8f-2rhs8" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--2rhs8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--2rhs8-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"94b6ed52-6d9b-4b44-8661-86ed9c610caa", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 32, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aedc24c80c545ababd13286ff7ebb10a6fcb1b7ea3b84b3a8f6ea70c551e72ee", Pod:"coredns-6f6b679f8f-2rhs8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali84104efaa12", MAC:"5e:9a:6f:49:e4:58", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:32:57.669228 containerd[1468]: 2025-01-13 20:32:57.666 [INFO][4225] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="aedc24c80c545ababd13286ff7ebb10a6fcb1b7ea3b84b3a8f6ea70c551e72ee" Namespace="kube-system" Pod="coredns-6f6b679f8f-2rhs8" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--2rhs8-eth0"
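
The IPAM trace above follows a fixed sequence: acquire the host-wide IPAM lock, load the host's affine block (192.168.88.128/26), claim the first free address, write the block back to the datastore, release the lock. The first claim lands on 192.168.88.129 for this coredns pod; the next endpoint below receives .130. An illustrative first-free scan over the block, not Calico's actual allocator (which also manages handles, affinities, and reserved addresses):

    package main

    import (
        "fmt"
        "net"
    )

    // nextFree returns the first address in the block not already assigned.
    func nextFree(block *net.IPNet, used map[string]bool) net.IP {
        ip := block.IP.Mask(block.Mask)
        for ; block.Contains(ip); ip = incr(ip) {
            if !used[ip.String()] {
                return ip
            }
        }
        return nil
    }

    // incr returns ip+1, carrying across byte boundaries.
    func incr(ip net.IP) net.IP {
        out := make(net.IP, len(ip))
        copy(out, ip)
        for i := len(out) - 1; i >= 0; i-- {
            out[i]++
            if out[i] != 0 {
                break
            }
        }
        return out
    }

    func main() {
        _, block, _ := net.ParseCIDR("192.168.88.128/26")
        used := map[string]bool{"192.168.88.128": true} // network address skipped in practice
        a := nextFree(block, used)                      // 192.168.88.129
        used[a.String()] = true
        b := nextFree(block, used)                      // 192.168.88.130
        fmt.Println(a, b)
    }
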
Jan 13 20:32:57.686530 containerd[1468]: time="2025-01-13T20:32:57.685951272Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:32:57.686530 containerd[1468]: time="2025-01-13T20:32:57.686354202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:32:57.686530 containerd[1468]: time="2025-01-13T20:32:57.686367043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:32:57.686530 containerd[1468]: time="2025-01-13T20:32:57.686446685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:32:57.712097 systemd[1]: Started cri-containerd-aedc24c80c545ababd13286ff7ebb10a6fcb1b7ea3b84b3a8f6ea70c551e72ee.scope - libcontainer container aedc24c80c545ababd13286ff7ebb10a6fcb1b7ea3b84b3a8f6ea70c551e72ee.
Jan 13 20:32:57.725038 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 13 20:32:57.740506 systemd-networkd[1393]: cali4ae22ec1c07: Link UP
Jan 13 20:32:57.741421 systemd-networkd[1393]: cali4ae22ec1c07: Gained carrier
Jan 13 20:32:57.751701 containerd[1468]: 2025-01-13 20:32:57.304 [INFO][4263] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 13 20:32:57.751701 containerd[1468]: 2025-01-13 20:32:57.349 [INFO][4263] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--69b5874dc7--pn7tk-eth0 calico-apiserver-69b5874dc7- calico-apiserver  020d487a-592d-466b-b805-5233e1a92845 724 0 2025-01-13 20:32:46 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69b5874dc7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s  localhost  calico-apiserver-69b5874dc7-pn7tk eth0 calico-apiserver [] []   [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4ae22ec1c07  [] []}} ContainerID="11571b725e23c1b567941ad1ad9bd5a53df11ca23b28317b3d9f00f89df0674c" Namespace="calico-apiserver" Pod="calico-apiserver-69b5874dc7-pn7tk" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b5874dc7--pn7tk-"
Jan 13 20:32:57.751701 containerd[1468]: 2025-01-13 20:32:57.349 [INFO][4263] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="11571b725e23c1b567941ad1ad9bd5a53df11ca23b28317b3d9f00f89df0674c" Namespace="calico-apiserver" Pod="calico-apiserver-69b5874dc7-pn7tk" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b5874dc7--pn7tk-eth0"
Jan 13 20:32:57.751701 containerd[1468]: 2025-01-13 20:32:57.576 [INFO][4309] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="11571b725e23c1b567941ad1ad9bd5a53df11ca23b28317b3d9f00f89df0674c" HandleID="k8s-pod-network.11571b725e23c1b567941ad1ad9bd5a53df11ca23b28317b3d9f00f89df0674c" Workload="localhost-k8s-calico--apiserver--69b5874dc7--pn7tk-eth0"
Jan 13 20:32:57.751701 containerd[1468]: 2025-01-13 20:32:57.597 [INFO][4309] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="11571b725e23c1b567941ad1ad9bd5a53df11ca23b28317b3d9f00f89df0674c" HandleID="k8s-pod-network.11571b725e23c1b567941ad1ad9bd5a53df11ca23b28317b3d9f00f89df0674c" Workload="localhost-k8s-calico--apiserver--69b5874dc7--pn7tk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003cfea0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-69b5874dc7-pn7tk", "timestamp":"2025-01-13 20:32:57.576712598 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 13 20:32:57.751701 containerd[1468]: 2025-01-13 20:32:57.597 [INFO][4309] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 20:32:57.751701 containerd[1468]: 2025-01-13 20:32:57.631 [INFO][4309] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 20:32:57.751701 containerd[1468]: 2025-01-13 20:32:57.631 [INFO][4309] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jan 13 20:32:57.751701 containerd[1468]: 2025-01-13 20:32:57.702 [INFO][4309] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.11571b725e23c1b567941ad1ad9bd5a53df11ca23b28317b3d9f00f89df0674c" host="localhost"
Jan 13 20:32:57.751701 containerd[1468]: 2025-01-13 20:32:57.707 [INFO][4309] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
Jan 13 20:32:57.751701 containerd[1468]: 2025-01-13 20:32:57.716 [INFO][4309] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Jan 13 20:32:57.751701 containerd[1468]: 2025-01-13 20:32:57.719 [INFO][4309] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jan 13 20:32:57.751701 containerd[1468]: 2025-01-13 20:32:57.721 [INFO][4309] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jan 13 20:32:57.751701 containerd[1468]: 2025-01-13 20:32:57.721 [INFO][4309] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.11571b725e23c1b567941ad1ad9bd5a53df11ca23b28317b3d9f00f89df0674c" host="localhost"
Jan 13 20:32:57.751701 containerd[1468]: 2025-01-13 20:32:57.724 [INFO][4309] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.11571b725e23c1b567941ad1ad9bd5a53df11ca23b28317b3d9f00f89df0674c
Jan 13 20:32:57.751701 containerd[1468]: 2025-01-13 20:32:57.729 [INFO][4309] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.11571b725e23c1b567941ad1ad9bd5a53df11ca23b28317b3d9f00f89df0674c" host="localhost"
Jan 13 20:32:57.751701 containerd[1468]: 2025-01-13 20:32:57.735 [INFO][4309] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.11571b725e23c1b567941ad1ad9bd5a53df11ca23b28317b3d9f00f89df0674c" host="localhost"
Jan 13 20:32:57.751701 containerd[1468]: 2025-01-13 20:32:57.735 [INFO][4309] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.11571b725e23c1b567941ad1ad9bd5a53df11ca23b28317b3d9f00f89df0674c" host="localhost"
Jan 13 20:32:57.751701 containerd[1468]: 2025-01-13 20:32:57.735 [INFO][4309] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 20:32:57.751701 containerd[1468]: 2025-01-13 20:32:57.735 [INFO][4309] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="11571b725e23c1b567941ad1ad9bd5a53df11ca23b28317b3d9f00f89df0674c" HandleID="k8s-pod-network.11571b725e23c1b567941ad1ad9bd5a53df11ca23b28317b3d9f00f89df0674c" Workload="localhost-k8s-calico--apiserver--69b5874dc7--pn7tk-eth0"
Jan 13 20:32:57.752286 containerd[1468]: 2025-01-13 20:32:57.738 [INFO][4263] cni-plugin/k8s.go 386: Populated endpoint ContainerID="11571b725e23c1b567941ad1ad9bd5a53df11ca23b28317b3d9f00f89df0674c" Namespace="calico-apiserver" Pod="calico-apiserver-69b5874dc7-pn7tk" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b5874dc7--pn7tk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69b5874dc7--pn7tk-eth0", GenerateName:"calico-apiserver-69b5874dc7-", Namespace:"calico-apiserver", SelfLink:"", UID:"020d487a-592d-466b-b805-5233e1a92845", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 32, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69b5874dc7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-69b5874dc7-pn7tk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4ae22ec1c07", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:32:57.752286 containerd[1468]: 2025-01-13 20:32:57.738 [INFO][4263] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="11571b725e23c1b567941ad1ad9bd5a53df11ca23b28317b3d9f00f89df0674c" Namespace="calico-apiserver" Pod="calico-apiserver-69b5874dc7-pn7tk" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b5874dc7--pn7tk-eth0"
Jan 13 20:32:57.752286 containerd[1468]: 2025-01-13 20:32:57.738 [INFO][4263] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4ae22ec1c07 ContainerID="11571b725e23c1b567941ad1ad9bd5a53df11ca23b28317b3d9f00f89df0674c" Namespace="calico-apiserver" Pod="calico-apiserver-69b5874dc7-pn7tk" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b5874dc7--pn7tk-eth0"
Jan 13 20:32:57.752286 containerd[1468]: 2025-01-13 20:32:57.740 [INFO][4263] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="11571b725e23c1b567941ad1ad9bd5a53df11ca23b28317b3d9f00f89df0674c" Namespace="calico-apiserver" Pod="calico-apiserver-69b5874dc7-pn7tk" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b5874dc7--pn7tk-eth0"
Jan 13 20:32:57.752286 containerd[1468]: 2025-01-13 20:32:57.740 [INFO][4263] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="11571b725e23c1b567941ad1ad9bd5a53df11ca23b28317b3d9f00f89df0674c" Namespace="calico-apiserver" Pod="calico-apiserver-69b5874dc7-pn7tk" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b5874dc7--pn7tk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69b5874dc7--pn7tk-eth0", GenerateName:"calico-apiserver-69b5874dc7-", Namespace:"calico-apiserver", SelfLink:"", UID:"020d487a-592d-466b-b805-5233e1a92845", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 32, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69b5874dc7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"11571b725e23c1b567941ad1ad9bd5a53df11ca23b28317b3d9f00f89df0674c", Pod:"calico-apiserver-69b5874dc7-pn7tk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4ae22ec1c07", MAC:"76:e1:27:6f:9f:b7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:32:57.752286 containerd[1468]: 2025-01-13 20:32:57.749 [INFO][4263] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="11571b725e23c1b567941ad1ad9bd5a53df11ca23b28317b3d9f00f89df0674c" Namespace="calico-apiserver" Pod="calico-apiserver-69b5874dc7-pn7tk" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b5874dc7--pn7tk-eth0"
Jan 13 20:32:57.764958 containerd[1468]: time="2025-01-13T20:32:57.764857461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-2rhs8,Uid:94b6ed52-6d9b-4b44-8661-86ed9c610caa,Namespace:kube-system,Attempt:4,} returns sandbox id \"aedc24c80c545ababd13286ff7ebb10a6fcb1b7ea3b84b3a8f6ea70c551e72ee\""
Jan 13 20:32:57.765717 kubelet[2539]: E0113 20:32:57.765693    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:57.767587 containerd[1468]: time="2025-01-13T20:32:57.767549729Z" level=info msg="CreateContainer within sandbox \"aedc24c80c545ababd13286ff7ebb10a6fcb1b7ea3b84b3a8f6ea70c551e72ee\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 13 20:32:57.771891 containerd[1468]: time="2025-01-13T20:32:57.771753075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:32:57.771891 containerd[1468]: time="2025-01-13T20:32:57.771844037Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:32:57.771891 containerd[1468]: time="2025-01-13T20:32:57.771859798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:32:57.772036 containerd[1468]: time="2025-01-13T20:32:57.771960600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:32:57.787321 containerd[1468]: time="2025-01-13T20:32:57.787272266Z" level=info msg="CreateContainer within sandbox \"aedc24c80c545ababd13286ff7ebb10a6fcb1b7ea3b84b3a8f6ea70c551e72ee\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"daa158b46d1f4f5e0e5707dcb458589be725de8a24901374ff8f4ff26065aedb\""
Jan 13 20:32:57.787872 containerd[1468]: time="2025-01-13T20:32:57.787717278Z" level=info msg="StartContainer for \"daa158b46d1f4f5e0e5707dcb458589be725de8a24901374ff8f4ff26065aedb\""
Jan 13 20:32:57.788134 systemd[1]: Started cri-containerd-11571b725e23c1b567941ad1ad9bd5a53df11ca23b28317b3d9f00f89df0674c.scope - libcontainer container 11571b725e23c1b567941ad1ad9bd5a53df11ca23b28317b3d9f00f89df0674c.
Jan 13 20:32:57.800552 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 13 20:32:57.816108 systemd[1]: Started cri-containerd-daa158b46d1f4f5e0e5707dcb458589be725de8a24901374ff8f4ff26065aedb.scope - libcontainer container daa158b46d1f4f5e0e5707dcb458589be725de8a24901374ff8f4ff26065aedb.
Jan 13 20:32:57.828172 containerd[1468]: time="2025-01-13T20:32:57.828115376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b5874dc7-pn7tk,Uid:020d487a-592d-466b-b805-5233e1a92845,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"11571b725e23c1b567941ad1ad9bd5a53df11ca23b28317b3d9f00f89df0674c\""
Jan 13 20:32:57.829396 systemd[1]: run-netns-cni\x2dcf01da2c\x2d0f54\x2d2181\x2d2892\x2d0ecf47420d78.mount: Deactivated successfully.
Jan 13 20:32:57.830102 systemd[1]: run-netns-cni\x2df5fb0fa4\x2d53e7\x2d4f88\x2d02c0\x2d2feb58fd57a3.mount: Deactivated successfully.
Jan 13 20:32:57.833081 containerd[1468]: time="2025-01-13T20:32:57.832663731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Jan 13 20:32:57.848477 systemd-networkd[1393]: cali7b2edad2543: Link UP
Jan 13 20:32:57.849535 systemd-networkd[1393]: cali7b2edad2543: Gained carrier
Jan 13 20:32:57.861116 containerd[1468]: time="2025-01-13T20:32:57.860965164Z" level=info msg="StartContainer for \"daa158b46d1f4f5e0e5707dcb458589be725de8a24901374ff8f4ff26065aedb\" returns successfully"
Jan 13 20:32:57.866107 containerd[1468]: 2025-01-13 20:32:57.229 [INFO][4236] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 13 20:32:57.866107 containerd[1468]: 2025-01-13 20:32:57.352 [INFO][4236] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--8574fbbb74--f4267-eth0 calico-kube-controllers-8574fbbb74- calico-system  45eae7c1-df7d-4875-9c9e-bd5fcfaa32a1 719 0 2025-01-13 20:32:45 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8574fbbb74 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s  localhost  calico-kube-controllers-8574fbbb74-f4267 eth0 calico-kube-controllers [] []   [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7b2edad2543  [] []}} ContainerID="1eca0c82608038e89b7fd3ab81af783fc344dae9e6db5b1eed8bdd7e3b0e6401" Namespace="calico-system" Pod="calico-kube-controllers-8574fbbb74-f4267" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8574fbbb74--f4267-"
Jan 13 20:32:57.866107 containerd[1468]: 2025-01-13 20:32:57.353 [INFO][4236] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1eca0c82608038e89b7fd3ab81af783fc344dae9e6db5b1eed8bdd7e3b0e6401" Namespace="calico-system" Pod="calico-kube-controllers-8574fbbb74-f4267" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8574fbbb74--f4267-eth0"
Jan 13 20:32:57.866107 containerd[1468]: 2025-01-13 20:32:57.577 [INFO][4311] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1eca0c82608038e89b7fd3ab81af783fc344dae9e6db5b1eed8bdd7e3b0e6401" HandleID="k8s-pod-network.1eca0c82608038e89b7fd3ab81af783fc344dae9e6db5b1eed8bdd7e3b0e6401" Workload="localhost-k8s-calico--kube--controllers--8574fbbb74--f4267-eth0"
Jan 13 20:32:57.866107 containerd[1468]: 2025-01-13 20:32:57.596 [INFO][4311] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1eca0c82608038e89b7fd3ab81af783fc344dae9e6db5b1eed8bdd7e3b0e6401" HandleID="k8s-pod-network.1eca0c82608038e89b7fd3ab81af783fc344dae9e6db5b1eed8bdd7e3b0e6401" Workload="localhost-k8s-calico--kube--controllers--8574fbbb74--f4267-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000351720), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-8574fbbb74-f4267", "timestamp":"2025-01-13 20:32:57.57795191 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 13 20:32:57.866107 containerd[1468]: 2025-01-13 20:32:57.597 [INFO][4311] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 20:32:57.866107 containerd[1468]: 2025-01-13 20:32:57.735 [INFO][4311] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 20:32:57.866107 containerd[1468]: 2025-01-13 20:32:57.735 [INFO][4311] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jan 13 20:32:57.866107 containerd[1468]: 2025-01-13 20:32:57.803 [INFO][4311] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1eca0c82608038e89b7fd3ab81af783fc344dae9e6db5b1eed8bdd7e3b0e6401" host="localhost"
Jan 13 20:32:57.866107 containerd[1468]: 2025-01-13 20:32:57.808 [INFO][4311] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
Jan 13 20:32:57.866107 containerd[1468]: 2025-01-13 20:32:57.819 [INFO][4311] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Jan 13 20:32:57.866107 containerd[1468]: 2025-01-13 20:32:57.823 [INFO][4311] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jan 13 20:32:57.866107 containerd[1468]: 2025-01-13 20:32:57.827 [INFO][4311] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jan 13 20:32:57.866107 containerd[1468]: 2025-01-13 20:32:57.827 [INFO][4311] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1eca0c82608038e89b7fd3ab81af783fc344dae9e6db5b1eed8bdd7e3b0e6401" host="localhost"
Jan 13 20:32:57.866107 containerd[1468]: 2025-01-13 20:32:57.830 [INFO][4311] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1eca0c82608038e89b7fd3ab81af783fc344dae9e6db5b1eed8bdd7e3b0e6401
Jan 13 20:32:57.866107 containerd[1468]: 2025-01-13 20:32:57.837 [INFO][4311] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1eca0c82608038e89b7fd3ab81af783fc344dae9e6db5b1eed8bdd7e3b0e6401" host="localhost"
Jan 13 20:32:57.866107 containerd[1468]: 2025-01-13 20:32:57.842 [INFO][4311] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.1eca0c82608038e89b7fd3ab81af783fc344dae9e6db5b1eed8bdd7e3b0e6401" host="localhost"
Jan 13 20:32:57.866107 containerd[1468]: 2025-01-13 20:32:57.842 [INFO][4311] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.1eca0c82608038e89b7fd3ab81af783fc344dae9e6db5b1eed8bdd7e3b0e6401" host="localhost"
Jan 13 20:32:57.866107 containerd[1468]: 2025-01-13 20:32:57.842 [INFO][4311] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 20:32:57.866107 containerd[1468]: 2025-01-13 20:32:57.842 [INFO][4311] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="1eca0c82608038e89b7fd3ab81af783fc344dae9e6db5b1eed8bdd7e3b0e6401" HandleID="k8s-pod-network.1eca0c82608038e89b7fd3ab81af783fc344dae9e6db5b1eed8bdd7e3b0e6401" Workload="localhost-k8s-calico--kube--controllers--8574fbbb74--f4267-eth0"
Jan 13 20:32:57.866634 containerd[1468]: 2025-01-13 20:32:57.846 [INFO][4236] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1eca0c82608038e89b7fd3ab81af783fc344dae9e6db5b1eed8bdd7e3b0e6401" Namespace="calico-system" Pod="calico-kube-controllers-8574fbbb74-f4267" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8574fbbb74--f4267-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8574fbbb74--f4267-eth0", GenerateName:"calico-kube-controllers-8574fbbb74-", Namespace:"calico-system", SelfLink:"", UID:"45eae7c1-df7d-4875-9c9e-bd5fcfaa32a1", ResourceVersion:"719", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 32, 45, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8574fbbb74", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-8574fbbb74-f4267", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7b2edad2543", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:32:57.866634 containerd[1468]: 2025-01-13 20:32:57.846 [INFO][4236] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="1eca0c82608038e89b7fd3ab81af783fc344dae9e6db5b1eed8bdd7e3b0e6401" Namespace="calico-system" Pod="calico-kube-controllers-8574fbbb74-f4267" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8574fbbb74--f4267-eth0"
Jan 13 20:32:57.866634 containerd[1468]: 2025-01-13 20:32:57.846 [INFO][4236] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7b2edad2543 ContainerID="1eca0c82608038e89b7fd3ab81af783fc344dae9e6db5b1eed8bdd7e3b0e6401" Namespace="calico-system" Pod="calico-kube-controllers-8574fbbb74-f4267" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8574fbbb74--f4267-eth0"
Jan 13 20:32:57.866634 containerd[1468]: 2025-01-13 20:32:57.849 [INFO][4236] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1eca0c82608038e89b7fd3ab81af783fc344dae9e6db5b1eed8bdd7e3b0e6401" Namespace="calico-system" Pod="calico-kube-controllers-8574fbbb74-f4267" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8574fbbb74--f4267-eth0"
Jan 13 20:32:57.866634 containerd[1468]: 2025-01-13 20:32:57.851 [INFO][4236] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1eca0c82608038e89b7fd3ab81af783fc344dae9e6db5b1eed8bdd7e3b0e6401" Namespace="calico-system" Pod="calico-kube-controllers-8574fbbb74-f4267" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8574fbbb74--f4267-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--8574fbbb74--f4267-eth0", GenerateName:"calico-kube-controllers-8574fbbb74-", Namespace:"calico-system", SelfLink:"", UID:"45eae7c1-df7d-4875-9c9e-bd5fcfaa32a1", ResourceVersion:"719", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 32, 45, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8574fbbb74", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1eca0c82608038e89b7fd3ab81af783fc344dae9e6db5b1eed8bdd7e3b0e6401", Pod:"calico-kube-controllers-8574fbbb74-f4267", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7b2edad2543", MAC:"b6:ab:19:25:aa:fd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:32:57.866634 containerd[1468]: 2025-01-13 20:32:57.864 [INFO][4236] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1eca0c82608038e89b7fd3ab81af783fc344dae9e6db5b1eed8bdd7e3b0e6401" Namespace="calico-system" Pod="calico-kube-controllers-8574fbbb74-f4267" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--8574fbbb74--f4267-eth0"
Jan 13 20:32:57.888727 containerd[1468]: time="2025-01-13T20:32:57.888304973Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:32:57.888727 containerd[1468]: time="2025-01-13T20:32:57.888683183Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:32:57.888727 containerd[1468]: time="2025-01-13T20:32:57.888701543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:32:57.888918 containerd[1468]: time="2025-01-13T20:32:57.888780025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:32:57.911975 systemd[1]: Started cri-containerd-1eca0c82608038e89b7fd3ab81af783fc344dae9e6db5b1eed8bdd7e3b0e6401.scope - libcontainer container 1eca0c82608038e89b7fd3ab81af783fc344dae9e6db5b1eed8bdd7e3b0e6401.
Jan 13 20:32:57.947335 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 13 20:32:57.956679 systemd-networkd[1393]: cali035a3edb824: Link UP
Jan 13 20:32:57.957087 systemd-networkd[1393]: cali035a3edb824: Gained carrier
Jan 13 20:32:57.975030 containerd[1468]: 2025-01-13 20:32:57.361 [INFO][4286] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 13 20:32:57.975030 containerd[1468]: 2025-01-13 20:32:57.389 [INFO][4286] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--69b5874dc7--m22rh-eth0 calico-apiserver-69b5874dc7- calico-apiserver  5c636eda-04d8-4d14-8cff-128a25fb05c4 721 0 2025-01-13 20:32:46 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69b5874dc7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s  localhost  calico-apiserver-69b5874dc7-m22rh eth0 calico-apiserver [] []   [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali035a3edb824  [] []}} ContainerID="a6273fe59ef4ab40301fe74b9f24d82b2129c926482da74b849220fb4f424437" Namespace="calico-apiserver" Pod="calico-apiserver-69b5874dc7-m22rh" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b5874dc7--m22rh-"
Jan 13 20:32:57.975030 containerd[1468]: 2025-01-13 20:32:57.390 [INFO][4286] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a6273fe59ef4ab40301fe74b9f24d82b2129c926482da74b849220fb4f424437" Namespace="calico-apiserver" Pod="calico-apiserver-69b5874dc7-m22rh" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b5874dc7--m22rh-eth0"
Jan 13 20:32:57.975030 containerd[1468]: 2025-01-13 20:32:57.576 [INFO][4332] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a6273fe59ef4ab40301fe74b9f24d82b2129c926482da74b849220fb4f424437" HandleID="k8s-pod-network.a6273fe59ef4ab40301fe74b9f24d82b2129c926482da74b849220fb4f424437" Workload="localhost-k8s-calico--apiserver--69b5874dc7--m22rh-eth0"
Jan 13 20:32:57.975030 containerd[1468]: 2025-01-13 20:32:57.598 [INFO][4332] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a6273fe59ef4ab40301fe74b9f24d82b2129c926482da74b849220fb4f424437" HandleID="k8s-pod-network.a6273fe59ef4ab40301fe74b9f24d82b2129c926482da74b849220fb4f424437" Workload="localhost-k8s-calico--apiserver--69b5874dc7--m22rh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003329a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-69b5874dc7-m22rh", "timestamp":"2025-01-13 20:32:57.57639243 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 13 20:32:57.975030 containerd[1468]: 2025-01-13 20:32:57.598 [INFO][4332] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 20:32:57.975030 containerd[1468]: 2025-01-13 20:32:57.842 [INFO][4332] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 20:32:57.975030 containerd[1468]: 2025-01-13 20:32:57.842 [INFO][4332] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jan 13 20:32:57.975030 containerd[1468]: 2025-01-13 20:32:57.905 [INFO][4332] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a6273fe59ef4ab40301fe74b9f24d82b2129c926482da74b849220fb4f424437" host="localhost"
Jan 13 20:32:57.975030 containerd[1468]: 2025-01-13 20:32:57.910 [INFO][4332] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
Jan 13 20:32:57.975030 containerd[1468]: 2025-01-13 20:32:57.918 [INFO][4332] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Jan 13 20:32:57.975030 containerd[1468]: 2025-01-13 20:32:57.920 [INFO][4332] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jan 13 20:32:57.975030 containerd[1468]: 2025-01-13 20:32:57.923 [INFO][4332] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jan 13 20:32:57.975030 containerd[1468]: 2025-01-13 20:32:57.923 [INFO][4332] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.a6273fe59ef4ab40301fe74b9f24d82b2129c926482da74b849220fb4f424437" host="localhost"
Jan 13 20:32:57.975030 containerd[1468]: 2025-01-13 20:32:57.924 [INFO][4332] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a6273fe59ef4ab40301fe74b9f24d82b2129c926482da74b849220fb4f424437
Jan 13 20:32:57.975030 containerd[1468]: 2025-01-13 20:32:57.930 [INFO][4332] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.a6273fe59ef4ab40301fe74b9f24d82b2129c926482da74b849220fb4f424437" host="localhost"
Jan 13 20:32:57.975030 containerd[1468]: 2025-01-13 20:32:57.943 [INFO][4332] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.a6273fe59ef4ab40301fe74b9f24d82b2129c926482da74b849220fb4f424437" host="localhost"
Jan 13 20:32:57.975030 containerd[1468]: 2025-01-13 20:32:57.943 [INFO][4332] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.a6273fe59ef4ab40301fe74b9f24d82b2129c926482da74b849220fb4f424437" host="localhost"
Jan 13 20:32:57.975030 containerd[1468]: 2025-01-13 20:32:57.943 [INFO][4332] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 20:32:57.975030 containerd[1468]: 2025-01-13 20:32:57.943 [INFO][4332] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="a6273fe59ef4ab40301fe74b9f24d82b2129c926482da74b849220fb4f424437" HandleID="k8s-pod-network.a6273fe59ef4ab40301fe74b9f24d82b2129c926482da74b849220fb4f424437" Workload="localhost-k8s-calico--apiserver--69b5874dc7--m22rh-eth0"
Jan 13 20:32:57.975670 containerd[1468]: 2025-01-13 20:32:57.953 [INFO][4286] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a6273fe59ef4ab40301fe74b9f24d82b2129c926482da74b849220fb4f424437" Namespace="calico-apiserver" Pod="calico-apiserver-69b5874dc7-m22rh" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b5874dc7--m22rh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69b5874dc7--m22rh-eth0", GenerateName:"calico-apiserver-69b5874dc7-", Namespace:"calico-apiserver", SelfLink:"", UID:"5c636eda-04d8-4d14-8cff-128a25fb05c4", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 32, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69b5874dc7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-69b5874dc7-m22rh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali035a3edb824", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:32:57.975670 containerd[1468]: 2025-01-13 20:32:57.953 [INFO][4286] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="a6273fe59ef4ab40301fe74b9f24d82b2129c926482da74b849220fb4f424437" Namespace="calico-apiserver" Pod="calico-apiserver-69b5874dc7-m22rh" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b5874dc7--m22rh-eth0"
Jan 13 20:32:57.975670 containerd[1468]: 2025-01-13 20:32:57.953 [INFO][4286] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali035a3edb824 ContainerID="a6273fe59ef4ab40301fe74b9f24d82b2129c926482da74b849220fb4f424437" Namespace="calico-apiserver" Pod="calico-apiserver-69b5874dc7-m22rh" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b5874dc7--m22rh-eth0"
Jan 13 20:32:57.975670 containerd[1468]: 2025-01-13 20:32:57.956 [INFO][4286] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a6273fe59ef4ab40301fe74b9f24d82b2129c926482da74b849220fb4f424437" Namespace="calico-apiserver" Pod="calico-apiserver-69b5874dc7-m22rh" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b5874dc7--m22rh-eth0"
Jan 13 20:32:57.975670 containerd[1468]: 2025-01-13 20:32:57.956 [INFO][4286] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a6273fe59ef4ab40301fe74b9f24d82b2129c926482da74b849220fb4f424437" Namespace="calico-apiserver" Pod="calico-apiserver-69b5874dc7-m22rh" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b5874dc7--m22rh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69b5874dc7--m22rh-eth0", GenerateName:"calico-apiserver-69b5874dc7-", Namespace:"calico-apiserver", SelfLink:"", UID:"5c636eda-04d8-4d14-8cff-128a25fb05c4", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 32, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69b5874dc7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"a6273fe59ef4ab40301fe74b9f24d82b2129c926482da74b849220fb4f424437", Pod:"calico-apiserver-69b5874dc7-m22rh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali035a3edb824", MAC:"b2:17:a0:10:df:0b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:32:57.975670 containerd[1468]: 2025-01-13 20:32:57.970 [INFO][4286] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a6273fe59ef4ab40301fe74b9f24d82b2129c926482da74b849220fb4f424437" Namespace="calico-apiserver" Pod="calico-apiserver-69b5874dc7-m22rh" WorkloadEndpoint="localhost-k8s-calico--apiserver--69b5874dc7--m22rh-eth0"
Jan 13 20:32:57.993153 containerd[1468]: time="2025-01-13T20:32:57.993101575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8574fbbb74-f4267,Uid:45eae7c1-df7d-4875-9c9e-bd5fcfaa32a1,Namespace:calico-system,Attempt:4,} returns sandbox id \"1eca0c82608038e89b7fd3ab81af783fc344dae9e6db5b1eed8bdd7e3b0e6401\""
Jan 13 20:32:58.048789 containerd[1468]: time="2025-01-13T20:32:58.047231663Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:32:58.048789 containerd[1468]: time="2025-01-13T20:32:58.047617872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:32:58.048789 containerd[1468]: time="2025-01-13T20:32:58.047713795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:32:58.048789 containerd[1468]: time="2025-01-13T20:32:58.047869158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:32:58.075333 systemd-networkd[1393]: cali0f2da388d8c: Link UP
Jan 13 20:32:58.075491 systemd-networkd[1393]: cali0f2da388d8c: Gained carrier
Jan 13 20:32:58.092277 systemd[1]: Started cri-containerd-a6273fe59ef4ab40301fe74b9f24d82b2129c926482da74b849220fb4f424437.scope - libcontainer container a6273fe59ef4ab40301fe74b9f24d82b2129c926482da74b849220fb4f424437.
Jan 13 20:32:58.101937 containerd[1468]: 2025-01-13 20:32:57.352 [INFO][4279] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 13 20:32:58.101937 containerd[1468]: 2025-01-13 20:32:57.383 [INFO][4279] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--6f6b679f8f--gfjnm-eth0 coredns-6f6b679f8f- kube-system  956a79b0-4c28-4303-8930-015d57ee6a8d 722 0 2025-01-13 20:32:38 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s  localhost  coredns-6f6b679f8f-gfjnm eth0 coredns [] []   [kns.kube-system ksa.kube-system.coredns] cali0f2da388d8c  [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="3ff618a9d3cc8bdb7687b635280b22d5a82951010bcf951cbc672a7138009aa2" Namespace="kube-system" Pod="coredns-6f6b679f8f-gfjnm" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--gfjnm-"
Jan 13 20:32:58.101937 containerd[1468]: 2025-01-13 20:32:57.383 [INFO][4279] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3ff618a9d3cc8bdb7687b635280b22d5a82951010bcf951cbc672a7138009aa2" Namespace="kube-system" Pod="coredns-6f6b679f8f-gfjnm" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--gfjnm-eth0"
Jan 13 20:32:58.101937 containerd[1468]: 2025-01-13 20:32:57.576 [INFO][4331] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ff618a9d3cc8bdb7687b635280b22d5a82951010bcf951cbc672a7138009aa2" HandleID="k8s-pod-network.3ff618a9d3cc8bdb7687b635280b22d5a82951010bcf951cbc672a7138009aa2" Workload="localhost-k8s-coredns--6f6b679f8f--gfjnm-eth0"
Jan 13 20:32:58.101937 containerd[1468]: 2025-01-13 20:32:57.598 [INFO][4331] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3ff618a9d3cc8bdb7687b635280b22d5a82951010bcf951cbc672a7138009aa2" HandleID="k8s-pod-network.3ff618a9d3cc8bdb7687b635280b22d5a82951010bcf951cbc672a7138009aa2" Workload="localhost-k8s-coredns--6f6b679f8f--gfjnm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000345290), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-6f6b679f8f-gfjnm", "timestamp":"2025-01-13 20:32:57.57638687 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 13 20:32:58.101937 containerd[1468]: 2025-01-13 20:32:57.598 [INFO][4331] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 20:32:58.101937 containerd[1468]: 2025-01-13 20:32:57.944 [INFO][4331] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 20:32:58.101937 containerd[1468]: 2025-01-13 20:32:57.945 [INFO][4331] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jan 13 20:32:58.101937 containerd[1468]: 2025-01-13 20:32:58.003 [INFO][4331] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3ff618a9d3cc8bdb7687b635280b22d5a82951010bcf951cbc672a7138009aa2" host="localhost"
Jan 13 20:32:58.101937 containerd[1468]: 2025-01-13 20:32:58.012 [INFO][4331] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
Jan 13 20:32:58.101937 containerd[1468]: 2025-01-13 20:32:58.022 [INFO][4331] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Jan 13 20:32:58.101937 containerd[1468]: 2025-01-13 20:32:58.025 [INFO][4331] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jan 13 20:32:58.101937 containerd[1468]: 2025-01-13 20:32:58.035 [INFO][4331] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jan 13 20:32:58.101937 containerd[1468]: 2025-01-13 20:32:58.035 [INFO][4331] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3ff618a9d3cc8bdb7687b635280b22d5a82951010bcf951cbc672a7138009aa2" host="localhost"
Jan 13 20:32:58.101937 containerd[1468]: 2025-01-13 20:32:58.040 [INFO][4331] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3ff618a9d3cc8bdb7687b635280b22d5a82951010bcf951cbc672a7138009aa2
Jan 13 20:32:58.101937 containerd[1468]: 2025-01-13 20:32:58.045 [INFO][4331] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3ff618a9d3cc8bdb7687b635280b22d5a82951010bcf951cbc672a7138009aa2" host="localhost"
Jan 13 20:32:58.101937 containerd[1468]: 2025-01-13 20:32:58.063 [INFO][4331] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.3ff618a9d3cc8bdb7687b635280b22d5a82951010bcf951cbc672a7138009aa2" host="localhost"
Jan 13 20:32:58.101937 containerd[1468]: 2025-01-13 20:32:58.063 [INFO][4331] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.3ff618a9d3cc8bdb7687b635280b22d5a82951010bcf951cbc672a7138009aa2" host="localhost"
Jan 13 20:32:58.101937 containerd[1468]: 2025-01-13 20:32:58.063 [INFO][4331] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 20:32:58.101937 containerd[1468]: 2025-01-13 20:32:58.063 [INFO][4331] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="3ff618a9d3cc8bdb7687b635280b22d5a82951010bcf951cbc672a7138009aa2" HandleID="k8s-pod-network.3ff618a9d3cc8bdb7687b635280b22d5a82951010bcf951cbc672a7138009aa2" Workload="localhost-k8s-coredns--6f6b679f8f--gfjnm-eth0"
Jan 13 20:32:58.102503 containerd[1468]: 2025-01-13 20:32:58.069 [INFO][4279] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3ff618a9d3cc8bdb7687b635280b22d5a82951010bcf951cbc672a7138009aa2" Namespace="kube-system" Pod="coredns-6f6b679f8f-gfjnm" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--gfjnm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--gfjnm-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"956a79b0-4c28-4303-8930-015d57ee6a8d", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 32, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-6f6b679f8f-gfjnm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0f2da388d8c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:32:58.102503 containerd[1468]: 2025-01-13 20:32:58.069 [INFO][4279] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="3ff618a9d3cc8bdb7687b635280b22d5a82951010bcf951cbc672a7138009aa2" Namespace="kube-system" Pod="coredns-6f6b679f8f-gfjnm" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--gfjnm-eth0"
Jan 13 20:32:58.102503 containerd[1468]: 2025-01-13 20:32:58.069 [INFO][4279] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0f2da388d8c ContainerID="3ff618a9d3cc8bdb7687b635280b22d5a82951010bcf951cbc672a7138009aa2" Namespace="kube-system" Pod="coredns-6f6b679f8f-gfjnm" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--gfjnm-eth0"
Jan 13 20:32:58.102503 containerd[1468]: 2025-01-13 20:32:58.071 [INFO][4279] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ff618a9d3cc8bdb7687b635280b22d5a82951010bcf951cbc672a7138009aa2" Namespace="kube-system" Pod="coredns-6f6b679f8f-gfjnm" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--gfjnm-eth0"
Jan 13 20:32:58.102503 containerd[1468]: 2025-01-13 20:32:58.072 [INFO][4279] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3ff618a9d3cc8bdb7687b635280b22d5a82951010bcf951cbc672a7138009aa2" Namespace="kube-system" Pod="coredns-6f6b679f8f-gfjnm" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--gfjnm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--6f6b679f8f--gfjnm-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"956a79b0-4c28-4303-8930-015d57ee6a8d", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 32, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3ff618a9d3cc8bdb7687b635280b22d5a82951010bcf951cbc672a7138009aa2", Pod:"coredns-6f6b679f8f-gfjnm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0f2da388d8c", MAC:"6a:1f:31:9c:62:c5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:32:58.102503 containerd[1468]: 2025-01-13 20:32:58.095 [INFO][4279] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3ff618a9d3cc8bdb7687b635280b22d5a82951010bcf951cbc672a7138009aa2" Namespace="kube-system" Pod="coredns-6f6b679f8f-gfjnm" WorkloadEndpoint="localhost-k8s-coredns--6f6b679f8f--gfjnm-eth0"
Jan 13 20:32:58.130151 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 13 20:32:58.140779 containerd[1468]: time="2025-01-13T20:32:58.140673104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:32:58.140956 containerd[1468]: time="2025-01-13T20:32:58.140751786Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:32:58.140956 containerd[1468]: time="2025-01-13T20:32:58.140767387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:32:58.140956 containerd[1468]: time="2025-01-13T20:32:58.140847949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:32:58.174501 systemd[1]: Started cri-containerd-3ff618a9d3cc8bdb7687b635280b22d5a82951010bcf951cbc672a7138009aa2.scope - libcontainer container 3ff618a9d3cc8bdb7687b635280b22d5a82951010bcf951cbc672a7138009aa2.
Jan 13 20:32:58.181040 systemd-networkd[1393]: cali32f8b8780c0: Link UP
Jan 13 20:32:58.182089 systemd-networkd[1393]: cali32f8b8780c0: Gained carrier
Jan 13 20:32:58.200250 containerd[1468]: time="2025-01-13T20:32:58.200009753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69b5874dc7-m22rh,Uid:5c636eda-04d8-4d14-8cff-128a25fb05c4,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"a6273fe59ef4ab40301fe74b9f24d82b2129c926482da74b849220fb4f424437\""
Jan 13 20:32:58.202163 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 13 20:32:58.222750 containerd[1468]: 2025-01-13 20:32:57.241 [INFO][4247] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 13 20:32:58.222750 containerd[1468]: 2025-01-13 20:32:57.355 [INFO][4247] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--mz55k-eth0 csi-node-driver- calico-system  1ccd530c-1ce4-41fc-b0fc-1d9142439edd 571 0 2025-01-13 20:32:45 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s  localhost  csi-node-driver-mz55k eth0 csi-node-driver [] []   [kns.calico-system ksa.calico-system.csi-node-driver] cali32f8b8780c0  [] []}} ContainerID="80a0bd3be67f4e7ed197d24891801f0c95bead2f155217bef997df859b799052" Namespace="calico-system" Pod="csi-node-driver-mz55k" WorkloadEndpoint="localhost-k8s-csi--node--driver--mz55k-"
Jan 13 20:32:58.222750 containerd[1468]: 2025-01-13 20:32:57.356 [INFO][4247] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="80a0bd3be67f4e7ed197d24891801f0c95bead2f155217bef997df859b799052" Namespace="calico-system" Pod="csi-node-driver-mz55k" WorkloadEndpoint="localhost-k8s-csi--node--driver--mz55k-eth0"
Jan 13 20:32:58.222750 containerd[1468]: 2025-01-13 20:32:57.576 [INFO][4313] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="80a0bd3be67f4e7ed197d24891801f0c95bead2f155217bef997df859b799052" HandleID="k8s-pod-network.80a0bd3be67f4e7ed197d24891801f0c95bead2f155217bef997df859b799052" Workload="localhost-k8s-csi--node--driver--mz55k-eth0"
Jan 13 20:32:58.222750 containerd[1468]: 2025-01-13 20:32:57.599 [INFO][4313] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="80a0bd3be67f4e7ed197d24891801f0c95bead2f155217bef997df859b799052" HandleID="k8s-pod-network.80a0bd3be67f4e7ed197d24891801f0c95bead2f155217bef997df859b799052" Workload="localhost-k8s-csi--node--driver--mz55k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002da9f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-mz55k", "timestamp":"2025-01-13 20:32:57.576838482 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 13 20:32:58.222750 containerd[1468]: 2025-01-13 20:32:57.599 [INFO][4313] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 13 20:32:58.222750 containerd[1468]: 2025-01-13 20:32:58.063 [INFO][4313] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 13 20:32:58.222750 containerd[1468]: 2025-01-13 20:32:58.063 [INFO][4313] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jan 13 20:32:58.222750 containerd[1468]: 2025-01-13 20:32:58.110 [INFO][4313] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.80a0bd3be67f4e7ed197d24891801f0c95bead2f155217bef997df859b799052" host="localhost"
Jan 13 20:32:58.222750 containerd[1468]: 2025-01-13 20:32:58.120 [INFO][4313] ipam/ipam.go 372: Looking up existing affinities for host host="localhost"
Jan 13 20:32:58.222750 containerd[1468]: 2025-01-13 20:32:58.131 [INFO][4313] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost"
Jan 13 20:32:58.222750 containerd[1468]: 2025-01-13 20:32:58.133 [INFO][4313] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jan 13 20:32:58.222750 containerd[1468]: 2025-01-13 20:32:58.137 [INFO][4313] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jan 13 20:32:58.222750 containerd[1468]: 2025-01-13 20:32:58.137 [INFO][4313] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.80a0bd3be67f4e7ed197d24891801f0c95bead2f155217bef997df859b799052" host="localhost"
Jan 13 20:32:58.222750 containerd[1468]: 2025-01-13 20:32:58.141 [INFO][4313] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.80a0bd3be67f4e7ed197d24891801f0c95bead2f155217bef997df859b799052
Jan 13 20:32:58.222750 containerd[1468]: 2025-01-13 20:32:58.153 [INFO][4313] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.80a0bd3be67f4e7ed197d24891801f0c95bead2f155217bef997df859b799052" host="localhost"
Jan 13 20:32:58.222750 containerd[1468]: 2025-01-13 20:32:58.164 [INFO][4313] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.80a0bd3be67f4e7ed197d24891801f0c95bead2f155217bef997df859b799052" host="localhost"
Jan 13 20:32:58.222750 containerd[1468]: 2025-01-13 20:32:58.164 [INFO][4313] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.80a0bd3be67f4e7ed197d24891801f0c95bead2f155217bef997df859b799052" host="localhost"
Jan 13 20:32:58.222750 containerd[1468]: 2025-01-13 20:32:58.164 [INFO][4313] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 13 20:32:58.222750 containerd[1468]: 2025-01-13 20:32:58.164 [INFO][4313] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="80a0bd3be67f4e7ed197d24891801f0c95bead2f155217bef997df859b799052" HandleID="k8s-pod-network.80a0bd3be67f4e7ed197d24891801f0c95bead2f155217bef997df859b799052" Workload="localhost-k8s-csi--node--driver--mz55k-eth0"
Jan 13 20:32:58.223323 containerd[1468]: 2025-01-13 20:32:58.172 [INFO][4247] cni-plugin/k8s.go 386: Populated endpoint ContainerID="80a0bd3be67f4e7ed197d24891801f0c95bead2f155217bef997df859b799052" Namespace="calico-system" Pod="csi-node-driver-mz55k" WorkloadEndpoint="localhost-k8s-csi--node--driver--mz55k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mz55k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1ccd530c-1ce4-41fc-b0fc-1d9142439edd", ResourceVersion:"571", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 32, 45, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-mz55k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali32f8b8780c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:32:58.223323 containerd[1468]: 2025-01-13 20:32:58.173 [INFO][4247] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="80a0bd3be67f4e7ed197d24891801f0c95bead2f155217bef997df859b799052" Namespace="calico-system" Pod="csi-node-driver-mz55k" WorkloadEndpoint="localhost-k8s-csi--node--driver--mz55k-eth0"
Jan 13 20:32:58.223323 containerd[1468]: 2025-01-13 20:32:58.173 [INFO][4247] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali32f8b8780c0 ContainerID="80a0bd3be67f4e7ed197d24891801f0c95bead2f155217bef997df859b799052" Namespace="calico-system" Pod="csi-node-driver-mz55k" WorkloadEndpoint="localhost-k8s-csi--node--driver--mz55k-eth0"
Jan 13 20:32:58.223323 containerd[1468]: 2025-01-13 20:32:58.180 [INFO][4247] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="80a0bd3be67f4e7ed197d24891801f0c95bead2f155217bef997df859b799052" Namespace="calico-system" Pod="csi-node-driver-mz55k" WorkloadEndpoint="localhost-k8s-csi--node--driver--mz55k-eth0"
Jan 13 20:32:58.223323 containerd[1468]: 2025-01-13 20:32:58.186 [INFO][4247] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="80a0bd3be67f4e7ed197d24891801f0c95bead2f155217bef997df859b799052" Namespace="calico-system" Pod="csi-node-driver-mz55k" WorkloadEndpoint="localhost-k8s-csi--node--driver--mz55k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mz55k-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1ccd530c-1ce4-41fc-b0fc-1d9142439edd", ResourceVersion:"571", Generation:0, CreationTimestamp:time.Date(2025, time.January, 13, 20, 32, 45, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"80a0bd3be67f4e7ed197d24891801f0c95bead2f155217bef997df859b799052", Pod:"csi-node-driver-mz55k", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali32f8b8780c0", MAC:"6e:7c:96:f1:79:c5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 13 20:32:58.223323 containerd[1468]: 2025-01-13 20:32:58.219 [INFO][4247] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="80a0bd3be67f4e7ed197d24891801f0c95bead2f155217bef997df859b799052" Namespace="calico-system" Pod="csi-node-driver-mz55k" WorkloadEndpoint="localhost-k8s-csi--node--driver--mz55k-eth0"
Jan 13 20:32:58.245789 kubelet[2539]: E0113 20:32:58.245080    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:58.248857 containerd[1468]: time="2025-01-13T20:32:58.248820025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-gfjnm,Uid:956a79b0-4c28-4303-8930-015d57ee6a8d,Namespace:kube-system,Attempt:4,} returns sandbox id \"3ff618a9d3cc8bdb7687b635280b22d5a82951010bcf951cbc672a7138009aa2\""
Jan 13 20:32:58.249781 kubelet[2539]: E0113 20:32:58.249741    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:58.249858 kubelet[2539]: I0113 20:32:58.249812    2539 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 20:32:58.250567 kubelet[2539]: E0113 20:32:58.250433    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
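The recurring kubelet dns.go warning means the node's resolv.conf carries more nameservers than the classic resolver limit of three, so kubelet truncates the list to the one shown ("1.1.1.1 1.0.0.1 8.8.8.8"). A minimal sketch of that cap; the helper name is mine, not kubelet's:

package main

import (
	"fmt"
	"strings"
)

// capNameservers mimics the behaviour behind the warning above: glibc
// resolvers only honour three nameservers, so extras are dropped.
func capNameservers(servers []string, limit int) []string {
	if len(servers) > limit {
		return servers[:limit]
	}
	return servers
}

func main() {
	host := []string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"}
	fmt.Println(strings.Join(capNameservers(host, 3), " "))
}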
Jan 13 20:32:58.252325 containerd[1468]: time="2025-01-13T20:32:58.252293590Z" level=info msg="CreateContainer within sandbox \"3ff618a9d3cc8bdb7687b635280b22d5a82951010bcf951cbc672a7138009aa2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 13 20:32:58.272189 containerd[1468]: time="2025-01-13T20:32:58.272089073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 13 20:32:58.272746 containerd[1468]: time="2025-01-13T20:32:58.272697088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 13 20:32:58.272746 containerd[1468]: time="2025-01-13T20:32:58.272722968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:32:58.272983 containerd[1468]: time="2025-01-13T20:32:58.272813051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 13 20:32:58.298134 systemd[1]: Started cri-containerd-80a0bd3be67f4e7ed197d24891801f0c95bead2f155217bef997df859b799052.scope - libcontainer container 80a0bd3be67f4e7ed197d24891801f0c95bead2f155217bef997df859b799052.
Jan 13 20:32:58.311953 kubelet[2539]: I0113 20:32:58.311826    2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-2rhs8" podStartSLOduration=20.311804883 podStartE2EDuration="20.311804883s" podCreationTimestamp="2025-01-13 20:32:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:32:58.278963201 +0000 UTC m=+26.573827854" watchObservedRunningTime="2025-01-13 20:32:58.311804883 +0000 UTC m=+26.606669536"
Jan 13 20:32:58.312282 containerd[1468]: time="2025-01-13T20:32:58.312004807Z" level=info msg="CreateContainer within sandbox \"3ff618a9d3cc8bdb7687b635280b22d5a82951010bcf951cbc672a7138009aa2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8acfdc71f78a45f5438fbf4aa346987c056adc6a5f190df37c7fa1fabfa8e4a5\""
Jan 13 20:32:58.314685 containerd[1468]: time="2025-01-13T20:32:58.314629351Z" level=info msg="StartContainer for \"8acfdc71f78a45f5438fbf4aa346987c056adc6a5f190df37c7fa1fabfa8e4a5\""
Jan 13 20:32:58.318576 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 13 20:32:58.365242 containerd[1468]: time="2025-01-13T20:32:58.365195826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mz55k,Uid:1ccd530c-1ce4-41fc-b0fc-1d9142439edd,Namespace:calico-system,Attempt:4,} returns sandbox id \"80a0bd3be67f4e7ed197d24891801f0c95bead2f155217bef997df859b799052\""
Jan 13 20:32:58.370139 systemd[1]: Started cri-containerd-8acfdc71f78a45f5438fbf4aa346987c056adc6a5f190df37c7fa1fabfa8e4a5.scope - libcontainer container 8acfdc71f78a45f5438fbf4aa346987c056adc6a5f190df37c7fa1fabfa8e4a5.
Jan 13 20:32:58.396896 containerd[1468]: time="2025-01-13T20:32:58.396772797Z" level=info msg="StartContainer for \"8acfdc71f78a45f5438fbf4aa346987c056adc6a5f190df37c7fa1fabfa8e4a5\" returns successfully"
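The lines from 20:32:58.252 to .396 show the container lifecycle for coredns inside sandbox 3ff618a9…: CreateContainer within the sandbox, a runc shim scope started by systemd (cri-containerd-8acfdc71….scope), then StartContainer returning successfully. A hedged equivalent using containerd's native Go client; kubelet actually drives this over the CRI gRPC API, and the image ref and IDs below are illustrative, not taken from this node:

package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Illustrative only: the k8s.io namespace is where the log's
	// sandbox and container IDs live.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	image, err := client.Pull(ctx, "registry.k8s.io/coredns/coredns:v1.11.1",
		containerd.WithPullUnpack) // image ref is an assumption
	if err != nil {
		log.Fatal(err)
	}

	container, err := client.NewContainer(ctx, "coredns-demo",
		containerd.WithNewSnapshot("coredns-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)))
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// NewTask + Start is the point at which the log prints
	// "StartContainer ... returns successfully".
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}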
Jan 13 20:32:58.413956 kernel: bpftool[4872]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
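The kernel nags here because bpftool created a memfd without declaring whether it may ever be executable; since Linux 6.3 callers are expected to pass MFD_EXEC or MFD_NOEXEC_SEAL explicitly. A sketch of the explicit form, assuming a golang.org/x/sys/unix recent enough to expose MFD_NOEXEC_SEAL:

package main

import (
	"log"

	"golang.org/x/sys/unix"
)

func main() {
	// Passing MFD_NOEXEC_SEAL (or MFD_EXEC) explicitly avoids the
	// "called without MFD_EXEC or MFD_NOEXEC_SEAL set" warning above.
	fd, err := unix.MemfdCreate("demo", unix.MFD_CLOEXEC|unix.MFD_NOEXEC_SEAL)
	if err != nil {
		log.Fatal(err)
	}
	defer unix.Close(fd)
	log.Printf("memfd created: fd=%d", fd)
}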
Jan 13 20:32:58.568014 systemd-networkd[1393]: vxlan.calico: Link UP
Jan 13 20:32:58.568021 systemd-networkd[1393]: vxlan.calico: Gained carrier
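Calico's felix has just programmed the vxlan.calico overlay device, which systemd-networkd observes coming up. A rough sketch of creating such a device with github.com/vishvananda/netlink; VNI 4096 and UDP port 4789 are Calico's documented defaults, assumed here rather than read from this cluster's config:

package main

import (
	"log"

	"github.com/vishvananda/netlink"
)

func main() {
	// Roughly what Felix does when it programs vxlan.calico.
	vxlan := &netlink.Vxlan{
		LinkAttrs: netlink.LinkAttrs{Name: "vxlan.calico"},
		VxlanId:   4096, // Calico default VNI (assumption here)
		Port:      4789, // standard VXLAN UDP port
	}
	if err := netlink.LinkAdd(vxlan); err != nil {
		log.Fatalf("LinkAdd: %v", err) // needs CAP_NET_ADMIN
	}
	if err := netlink.LinkSetUp(vxlan); err != nil {
		log.Fatalf("LinkSetUp: %v", err) // "Link UP" in the log above
	}
}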
Jan 13 20:32:58.873210 systemd-networkd[1393]: cali4ae22ec1c07: Gained IPv6LL
Jan 13 20:32:59.161543 containerd[1468]: time="2025-01-13T20:32:59.161422667Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:32:59.162223 containerd[1468]: time="2025-01-13T20:32:59.161937719Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409"
Jan 13 20:32:59.162962 containerd[1468]: time="2025-01-13T20:32:59.162908302Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:32:59.165382 containerd[1468]: time="2025-01-13T20:32:59.165342200Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:32:59.166364 containerd[1468]: time="2025-01-13T20:32:59.165809371Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 1.333109479s"
Jan 13 20:32:59.166364 containerd[1468]: time="2025-01-13T20:32:59.165836091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\""
Jan 13 20:32:59.168662 containerd[1468]: time="2025-01-13T20:32:59.168634358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\""
Jan 13 20:32:59.170785 containerd[1468]: time="2025-01-13T20:32:59.170754808Z" level=info msg="CreateContainer within sandbox \"11571b725e23c1b567941ad1ad9bd5a53df11ca23b28317b3d9f00f89df0674c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jan 13 20:32:59.182014 containerd[1468]: time="2025-01-13T20:32:59.181978073Z" level=info msg="CreateContainer within sandbox \"11571b725e23c1b567941ad1ad9bd5a53df11ca23b28317b3d9f00f89df0674c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"31fd9266db1f42b6a1ebbb40c35967f50de292ee68afc9930a695437d5690a1e\""
Jan 13 20:32:59.182809 containerd[1468]: time="2025-01-13T20:32:59.182747372Z" level=info msg="StartContainer for \"31fd9266db1f42b6a1ebbb40c35967f50de292ee68afc9930a695437d5690a1e\""
Jan 13 20:32:59.235109 systemd[1]: Started cri-containerd-31fd9266db1f42b6a1ebbb40c35967f50de292ee68afc9930a695437d5690a1e.scope - libcontainer container 31fd9266db1f42b6a1ebbb40c35967f50de292ee68afc9930a695437d5690a1e.
Jan 13 20:32:59.258387 systemd-networkd[1393]: cali84104efaa12: Gained IPv6LL
Jan 13 20:32:59.260430 kubelet[2539]: E0113 20:32:59.260387    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:59.261539 kubelet[2539]: E0113 20:32:59.260959    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:32:59.271506 kubelet[2539]: I0113 20:32:59.270833    2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-gfjnm" podStartSLOduration=21.270766015 podStartE2EDuration="21.270766015s" podCreationTimestamp="2025-01-13 20:32:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-13 20:32:59.270649652 +0000 UTC m=+27.565514345" watchObservedRunningTime="2025-01-13 20:32:59.270766015 +0000 UTC m=+27.565630668"
Jan 13 20:32:59.277428 containerd[1468]: time="2025-01-13T20:32:59.277374892Z" level=info msg="StartContainer for \"31fd9266db1f42b6a1ebbb40c35967f50de292ee68afc9930a695437d5690a1e\" returns successfully"
Jan 13 20:32:59.322892 systemd-networkd[1393]: cali035a3edb824: Gained IPv6LL
Jan 13 20:32:59.450527 systemd-networkd[1393]: cali32f8b8780c0: Gained IPv6LL
Jan 13 20:32:59.577551 systemd-networkd[1393]: cali0f2da388d8c: Gained IPv6LL
Jan 13 20:32:59.641520 systemd-networkd[1393]: cali7b2edad2543: Gained IPv6LL
Jan 13 20:32:59.897075 systemd-networkd[1393]: vxlan.calico: Gained IPv6LL
Jan 13 20:33:00.058709 systemd[1]: Started sshd@7-10.0.0.151:22-10.0.0.1:50470.service - OpenSSH per-connection server daemon (10.0.0.1:50470).
Jan 13 20:33:00.128668 sshd[5013]: Accepted publickey for core from 10.0.0.1 port 50470 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:33:00.131127 sshd-session[5013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:33:00.137837 systemd-logind[1456]: New session 8 of user core.
Jan 13 20:33:00.144092 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 13 20:33:00.268398 kubelet[2539]: E0113 20:33:00.268259    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:33:00.272136 kubelet[2539]: E0113 20:33:00.270090    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:33:00.282679 kubelet[2539]: I0113 20:33:00.282626    2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-69b5874dc7-pn7tk" podStartSLOduration=12.946443379 podStartE2EDuration="14.282611731s" podCreationTimestamp="2025-01-13 20:32:46 +0000 UTC" firstStartedPulling="2025-01-13 20:32:57.83223768 +0000 UTC m=+26.127102293" lastFinishedPulling="2025-01-13 20:32:59.168405992 +0000 UTC m=+27.463270645" observedRunningTime="2025-01-13 20:33:00.28172399 +0000 UTC m=+28.576588643" watchObservedRunningTime="2025-01-13 20:33:00.282611731 +0000 UTC m=+28.577476384"
Jan 13 20:33:00.441029 sshd[5019]: Connection closed by 10.0.0.1 port 50470
Jan 13 20:33:00.441102 sshd-session[5013]: pam_unix(sshd:session): session closed for user core
Jan 13 20:33:00.444979 systemd[1]: sshd@7-10.0.0.151:22-10.0.0.1:50470.service: Deactivated successfully.
Jan 13 20:33:00.448584 systemd[1]: session-8.scope: Deactivated successfully.
Jan 13 20:33:00.449856 systemd-logind[1456]: Session 8 logged out. Waiting for processes to exit.
Jan 13 20:33:00.455369 systemd-logind[1456]: Removed session 8.
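Each SSH connection in this log is socket-activated: systemd spawns a per-connection sshd@<n>-<local>-<remote>.service, PAM opens a session for core, logind tracks it as session-<n>.scope, and teardown runs in reverse order. A small parser for that unit-name shape; the helper is hypothetical, the naming scheme is systemd's for Accept=yes sockets:

package main

import (
	"fmt"
	"regexp"
)

// parseSSHDUnit splits a per-connection unit name like
// "sshd@7-10.0.0.151:22-10.0.0.1:50470.service" into its parts.
var unitRe = regexp.MustCompile(`^sshd@(\d+)-(.+:\d+)-(.+:\d+)\.service$`)

func parseSSHDUnit(unit string) (instance, local, remote string, ok bool) {
	m := unitRe.FindStringSubmatch(unit)
	if m == nil {
		return "", "", "", false
	}
	return m[1], m[2], m[3], true
}

func main() {
	n, l, r, _ := parseSSHDUnit("sshd@7-10.0.0.151:22-10.0.0.1:50470.service")
	fmt.Printf("instance=%s local=%s remote=%s\n", n, l, r)
}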
Jan 13 20:33:00.631807 containerd[1468]: time="2025-01-13T20:33:00.631180699Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:33:00.631807 containerd[1468]: time="2025-01-13T20:33:00.631660430Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828"
Jan 13 20:33:00.632913 containerd[1468]: time="2025-01-13T20:33:00.632879738Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:33:00.634729 containerd[1468]: time="2025-01-13T20:33:00.634692660Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:33:00.635851 containerd[1468]: time="2025-01-13T20:33:00.635820645Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 1.466723397s"
Jan 13 20:33:00.635851 containerd[1468]: time="2025-01-13T20:33:00.635850886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\""
Jan 13 20:33:00.637265 containerd[1468]: time="2025-01-13T20:33:00.636807908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\""
Jan 13 20:33:00.650033 containerd[1468]: time="2025-01-13T20:33:00.649997931Z" level=info msg="CreateContainer within sandbox \"1eca0c82608038e89b7fd3ab81af783fc344dae9e6db5b1eed8bdd7e3b0e6401\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Jan 13 20:33:00.661031 containerd[1468]: time="2025-01-13T20:33:00.660175925Z" level=info msg="CreateContainer within sandbox \"1eca0c82608038e89b7fd3ab81af783fc344dae9e6db5b1eed8bdd7e3b0e6401\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a44ed634fc2ba179684423a3dd53b726d757ab34d5378d4b08060c41b38dd39e\""
Jan 13 20:33:00.662868 containerd[1468]: time="2025-01-13T20:33:00.661695040Z" level=info msg="StartContainer for \"a44ed634fc2ba179684423a3dd53b726d757ab34d5378d4b08060c41b38dd39e\""
Jan 13 20:33:00.696093 systemd[1]: Started cri-containerd-a44ed634fc2ba179684423a3dd53b726d757ab34d5378d4b08060c41b38dd39e.scope - libcontainer container a44ed634fc2ba179684423a3dd53b726d757ab34d5378d4b08060c41b38dd39e.
Jan 13 20:33:00.755375 containerd[1468]: time="2025-01-13T20:33:00.755309071Z" level=info msg="StartContainer for \"a44ed634fc2ba179684423a3dd53b726d757ab34d5378d4b08060c41b38dd39e\" returns successfully"
Jan 13 20:33:00.863664 containerd[1468]: time="2025-01-13T20:33:00.863601199Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:33:00.864479 containerd[1468]: time="2025-01-13T20:33:00.864340776Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77"
Jan 13 20:33:00.872986 containerd[1468]: time="2025-01-13T20:33:00.872401801Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 235.315766ms"
Jan 13 20:33:00.872986 containerd[1468]: time="2025-01-13T20:33:00.872443402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\""
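Contrast the two apiserver pulls: the first (20:32:59) read ~39 MB in 1.33s, while this one finished in 235ms after reading only 77 bytes, because the repo digest already matched content on disk. Digest pinning is just a SHA-256 over the manifest bytes, as this sketch checks; the manifest literal is a placeholder, not the real document:

package main

import (
	"crypto/sha256"
	"fmt"
)

// verify reports whether manifest bytes hash to the pinned repo digest,
// the same check that lets a re-pull become a near no-op.
func verify(manifest []byte, want string) bool {
	sum := fmt.Sprintf("sha256:%x", sha256.Sum256(manifest))
	return sum == want
}

func main() {
	m := []byte("{...manifest bytes...}") // placeholder
	fmt.Println(verify(m, "sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486"))
}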
Jan 13 20:33:00.873914 containerd[1468]: time="2025-01-13T20:33:00.873875595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\""
Jan 13 20:33:00.882180 containerd[1468]: time="2025-01-13T20:33:00.882071423Z" level=info msg="CreateContainer within sandbox \"a6273fe59ef4ab40301fe74b9f24d82b2129c926482da74b849220fb4f424437\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jan 13 20:33:00.892883 containerd[1468]: time="2025-01-13T20:33:00.892839510Z" level=info msg="CreateContainer within sandbox \"a6273fe59ef4ab40301fe74b9f24d82b2129c926482da74b849220fb4f424437\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a2f130d3af495b0b6081eab3059f61c64d9d219f847e50a871e35121d6705dc7\""
Jan 13 20:33:00.894131 containerd[1468]: time="2025-01-13T20:33:00.893982177Z" level=info msg="StartContainer for \"a2f130d3af495b0b6081eab3059f61c64d9d219f847e50a871e35121d6705dc7\""
Jan 13 20:33:00.919113 systemd[1]: Started cri-containerd-a2f130d3af495b0b6081eab3059f61c64d9d219f847e50a871e35121d6705dc7.scope - libcontainer container a2f130d3af495b0b6081eab3059f61c64d9d219f847e50a871e35121d6705dc7.
Jan 13 20:33:00.954095 containerd[1468]: time="2025-01-13T20:33:00.954010476Z" level=info msg="StartContainer for \"a2f130d3af495b0b6081eab3059f61c64d9d219f847e50a871e35121d6705dc7\" returns successfully"
Jan 13 20:33:01.274438 kubelet[2539]: I0113 20:33:01.274407    2539 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 20:33:01.274931 kubelet[2539]: E0113 20:33:01.274900    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:33:01.285632 kubelet[2539]: I0113 20:33:01.285526    2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-69b5874dc7-m22rh" podStartSLOduration=12.615263792 podStartE2EDuration="15.285512106s" podCreationTimestamp="2025-01-13 20:32:46 +0000 UTC" firstStartedPulling="2025-01-13 20:32:58.203462797 +0000 UTC m=+26.498327450" lastFinishedPulling="2025-01-13 20:33:00.873711151 +0000 UTC m=+29.168575764" observedRunningTime="2025-01-13 20:33:01.284881612 +0000 UTC m=+29.579746265" watchObservedRunningTime="2025-01-13 20:33:01.285512106 +0000 UTC m=+29.580376759"
Jan 13 20:33:01.309032 kubelet[2539]: I0113 20:33:01.308911    2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-8574fbbb74-f4267" podStartSLOduration=13.66899971 podStartE2EDuration="16.308893747s" podCreationTimestamp="2025-01-13 20:32:45 +0000 UTC" firstStartedPulling="2025-01-13 20:32:57.996755427 +0000 UTC m=+26.291620080" lastFinishedPulling="2025-01-13 20:33:00.636649464 +0000 UTC m=+28.931514117" observedRunningTime="2025-01-13 20:33:01.308178732 +0000 UTC m=+29.603043385" watchObservedRunningTime="2025-01-13 20:33:01.308893747 +0000 UTC m=+29.603758400"
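The startup-latency entries report two numbers: podStartE2EDuration (pod creation to observed running) and podStartSLOduration, which subtracts the image-pull window. For calico-apiserver-69b5874dc7-m22rh above, 15.285s end-to-end minus the 2.670s between firstStartedPulling and lastFinishedPulling leaves the reported 12.615s. Reproducing the arithmetic from the wall-clock stamps; kubelet actually diffs the monotonic m=+ offsets, so the last few digits differ slightly:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the calico-apiserver-69b5874dc7-m22rh entry above.
	created, _ := time.Parse(time.RFC3339Nano, "2025-01-13T20:32:46Z")
	firstPull, _ := time.Parse(time.RFC3339Nano, "2025-01-13T20:32:58.203462797Z")
	lastPull, _ := time.Parse(time.RFC3339Nano, "2025-01-13T20:33:00.873711151Z")
	running, _ := time.Parse(time.RFC3339Nano, "2025-01-13T20:33:01.285512106Z")

	e2e := running.Sub(created)          // podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // SLO duration excludes pull time
	fmt.Printf("e2e=%v slo=%v\n", e2e, slo)
}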
Jan 13 20:33:01.815841 containerd[1468]: time="2025-01-13T20:33:01.815792942Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:33:01.816340 containerd[1468]: time="2025-01-13T20:33:01.816275233Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730"
Jan 13 20:33:01.817176 containerd[1468]: time="2025-01-13T20:33:01.817140812Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:33:01.819792 containerd[1468]: time="2025-01-13T20:33:01.819762190Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:33:01.821027 containerd[1468]: time="2025-01-13T20:33:01.820998418Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 947.085263ms"
Jan 13 20:33:01.821094 containerd[1468]: time="2025-01-13T20:33:01.821030699Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\""
Jan 13 20:33:01.823568 containerd[1468]: time="2025-01-13T20:33:01.823520034Z" level=info msg="CreateContainer within sandbox \"80a0bd3be67f4e7ed197d24891801f0c95bead2f155217bef997df859b799052\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Jan 13 20:33:01.838170 containerd[1468]: time="2025-01-13T20:33:01.837998837Z" level=info msg="CreateContainer within sandbox \"80a0bd3be67f4e7ed197d24891801f0c95bead2f155217bef997df859b799052\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"97fa330258ba13e9c6bfa644fa46b45dc5cfac8156e22d9c1dadf208d702ffee\""
Jan 13 20:33:01.839019 containerd[1468]: time="2025-01-13T20:33:01.838794055Z" level=info msg="StartContainer for \"97fa330258ba13e9c6bfa644fa46b45dc5cfac8156e22d9c1dadf208d702ffee\""
Jan 13 20:33:01.889094 systemd[1]: Started cri-containerd-97fa330258ba13e9c6bfa644fa46b45dc5cfac8156e22d9c1dadf208d702ffee.scope - libcontainer container 97fa330258ba13e9c6bfa644fa46b45dc5cfac8156e22d9c1dadf208d702ffee.
Jan 13 20:33:01.922204 containerd[1468]: time="2025-01-13T20:33:01.921459100Z" level=info msg="StartContainer for \"97fa330258ba13e9c6bfa644fa46b45dc5cfac8156e22d9c1dadf208d702ffee\" returns successfully"
Jan 13 20:33:01.926492 containerd[1468]: time="2025-01-13T20:33:01.926348449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Jan 13 20:33:02.278814 kubelet[2539]: I0113 20:33:02.278790    2539 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 20:33:02.279558 kubelet[2539]: E0113 20:33:02.279144    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:33:02.791544 containerd[1468]: time="2025-01-13T20:33:02.791502716Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:33:02.792031 containerd[1468]: time="2025-01-13T20:33:02.791987566Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368"
Jan 13 20:33:02.793014 containerd[1468]: time="2025-01-13T20:33:02.792976468Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:33:02.795108 containerd[1468]: time="2025-01-13T20:33:02.795079713Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\"  labels:{key:\"io.cri-containerd.image\"  value:\"managed\"}"
Jan 13 20:33:02.795894 containerd[1468]: time="2025-01-13T20:33:02.795861810Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 869.47924ms"
Jan 13 20:33:02.795977 containerd[1468]: time="2025-01-13T20:33:02.795892491Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\""
Jan 13 20:33:02.797787 containerd[1468]: time="2025-01-13T20:33:02.797758011Z" level=info msg="CreateContainer within sandbox \"80a0bd3be67f4e7ed197d24891801f0c95bead2f155217bef997df859b799052\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jan 13 20:33:02.811825 containerd[1468]: time="2025-01-13T20:33:02.811750355Z" level=info msg="CreateContainer within sandbox \"80a0bd3be67f4e7ed197d24891801f0c95bead2f155217bef997df859b799052\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"9572b4a13b78a8efa94583a83add64231f2c945680bf5cb3011644c1d2939f5a\""
Jan 13 20:33:02.812321 containerd[1468]: time="2025-01-13T20:33:02.812271087Z" level=info msg="StartContainer for \"9572b4a13b78a8efa94583a83add64231f2c945680bf5cb3011644c1d2939f5a\""
Jan 13 20:33:02.847095 systemd[1]: Started cri-containerd-9572b4a13b78a8efa94583a83add64231f2c945680bf5cb3011644c1d2939f5a.scope - libcontainer container 9572b4a13b78a8efa94583a83add64231f2c945680bf5cb3011644c1d2939f5a.
Jan 13 20:33:02.871594 containerd[1468]: time="2025-01-13T20:33:02.871474252Z" level=info msg="StartContainer for \"9572b4a13b78a8efa94583a83add64231f2c945680bf5cb3011644c1d2939f5a\" returns successfully"
Jan 13 20:33:03.302253 kubelet[2539]: I0113 20:33:03.301891    2539 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-mz55k" podStartSLOduration=13.873416534 podStartE2EDuration="18.301872102s" podCreationTimestamp="2025-01-13 20:32:45 +0000 UTC" firstStartedPulling="2025-01-13 20:32:58.368144858 +0000 UTC m=+26.663009511" lastFinishedPulling="2025-01-13 20:33:02.796600426 +0000 UTC m=+31.091465079" observedRunningTime="2025-01-13 20:33:03.301759779 +0000 UTC m=+31.596624432" watchObservedRunningTime="2025-01-13 20:33:03.301872102 +0000 UTC m=+31.596736755"
Jan 13 20:33:03.866651 kubelet[2539]: I0113 20:33:03.866598    2539 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jan 13 20:33:03.869119 kubelet[2539]: I0113 20:33:03.869084    2539 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
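kubelet's plugin watcher has found the registration socket and completed the GetInfo handshake for csi.tigera.io. A trimmed sketch of the registration server a driver's node-driver-registrar exposes, using the real k8s.io/kubelet/pkg/apis/pluginregistration/v1 API; socket paths and wiring are simplified:

package main

import (
	"context"
	"log"
	"net"

	"google.golang.org/grpc"
	registerapi "k8s.io/kubelet/pkg/apis/pluginregistration/v1"
)

// server answers kubelet's plugin-watcher handshake, mirroring what the
// log shows for csi.tigera.io.
type server struct{}

func (s server) GetInfo(ctx context.Context, r *registerapi.InfoRequest) (*registerapi.PluginInfo, error) {
	return &registerapi.PluginInfo{
		Type:              registerapi.CSIPlugin,
		Name:              "csi.tigera.io",
		Endpoint:          "/var/lib/kubelet/plugins/csi.tigera.io/csi.sock",
		SupportedVersions: []string{"1.0.0"}, // matches "versions: 1.0.0" above
	}, nil
}

func (s server) NotifyRegistrationStatus(ctx context.Context, st *registerapi.RegistrationStatus) (*registerapi.RegistrationStatusResponse, error) {
	if !st.PluginRegistered {
		log.Printf("registration failed: %s", st.Error)
	}
	return &registerapi.RegistrationStatusResponse{}, nil
}

func main() {
	// kubelet watches /var/lib/kubelet/plugins_registry for sockets.
	l, err := net.Listen("unix", "/var/lib/kubelet/plugins_registry/csi.tigera.io-reg.sock")
	if err != nil {
		log.Fatal(err)
	}
	g := grpc.NewServer()
	registerapi.RegisterRegistrationServer(g, server{})
	log.Fatal(g.Serve(l))
}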
Jan 13 20:33:05.451512 systemd[1]: Started sshd@8-10.0.0.151:22-10.0.0.1:34206.service - OpenSSH per-connection server daemon (10.0.0.1:34206).
Jan 13 20:33:05.509685 sshd[5226]: Accepted publickey for core from 10.0.0.1 port 34206 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:33:05.511142 sshd-session[5226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:33:05.515077 systemd-logind[1456]: New session 9 of user core.
Jan 13 20:33:05.531119 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 13 20:33:05.727975 sshd[5228]: Connection closed by 10.0.0.1 port 34206
Jan 13 20:33:05.728238 sshd-session[5226]: pam_unix(sshd:session): session closed for user core
Jan 13 20:33:05.731481 systemd[1]: sshd@8-10.0.0.151:22-10.0.0.1:34206.service: Deactivated successfully.
Jan 13 20:33:05.733559 systemd[1]: session-9.scope: Deactivated successfully.
Jan 13 20:33:05.734462 systemd-logind[1456]: Session 9 logged out. Waiting for processes to exit.
Jan 13 20:33:05.735570 systemd-logind[1456]: Removed session 9.
Jan 13 20:33:10.742413 systemd[1]: Started sshd@9-10.0.0.151:22-10.0.0.1:34218.service - OpenSSH per-connection server daemon (10.0.0.1:34218).
Jan 13 20:33:10.783633 sshd[5251]: Accepted publickey for core from 10.0.0.1 port 34218 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:33:10.784946 sshd-session[5251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:33:10.789363 systemd-logind[1456]: New session 10 of user core.
Jan 13 20:33:10.805101 systemd[1]: Started session-10.scope - Session 10 of User core.
Jan 13 20:33:10.971461 sshd[5253]: Connection closed by 10.0.0.1 port 34218
Jan 13 20:33:10.971966 sshd-session[5251]: pam_unix(sshd:session): session closed for user core
Jan 13 20:33:10.980435 systemd[1]: sshd@9-10.0.0.151:22-10.0.0.1:34218.service: Deactivated successfully.
Jan 13 20:33:10.981898 systemd[1]: session-10.scope: Deactivated successfully.
Jan 13 20:33:10.983158 systemd-logind[1456]: Session 10 logged out. Waiting for processes to exit.
Jan 13 20:33:10.993456 systemd[1]: Started sshd@10-10.0.0.151:22-10.0.0.1:34224.service - OpenSSH per-connection server daemon (10.0.0.1:34224).
Jan 13 20:33:10.994575 systemd-logind[1456]: Removed session 10.
Jan 13 20:33:11.029719 sshd[5267]: Accepted publickey for core from 10.0.0.1 port 34224 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:33:11.031018 sshd-session[5267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:33:11.035008 systemd-logind[1456]: New session 11 of user core.
Jan 13 20:33:11.045067 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 13 20:33:11.253454 sshd[5269]: Connection closed by 10.0.0.1 port 34224
Jan 13 20:33:11.254001 sshd-session[5267]: pam_unix(sshd:session): session closed for user core
Jan 13 20:33:11.263729 systemd[1]: sshd@10-10.0.0.151:22-10.0.0.1:34224.service: Deactivated successfully.
Jan 13 20:33:11.267098 systemd[1]: session-11.scope: Deactivated successfully.
Jan 13 20:33:11.269668 systemd-logind[1456]: Session 11 logged out. Waiting for processes to exit.
Jan 13 20:33:11.277283 systemd[1]: Started sshd@11-10.0.0.151:22-10.0.0.1:34228.service - OpenSSH per-connection server daemon (10.0.0.1:34228).
Jan 13 20:33:11.278620 systemd-logind[1456]: Removed session 11.
Jan 13 20:33:11.315230 sshd[5279]: Accepted publickey for core from 10.0.0.1 port 34228 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:33:11.316496 sshd-session[5279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:33:11.320279 systemd-logind[1456]: New session 12 of user core.
Jan 13 20:33:11.324120 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 13 20:33:11.466777 sshd[5281]: Connection closed by 10.0.0.1 port 34228
Jan 13 20:33:11.467173 sshd-session[5279]: pam_unix(sshd:session): session closed for user core
Jan 13 20:33:11.470591 systemd[1]: sshd@11-10.0.0.151:22-10.0.0.1:34228.service: Deactivated successfully.
Jan 13 20:33:11.473546 systemd[1]: session-12.scope: Deactivated successfully.
Jan 13 20:33:11.474168 systemd-logind[1456]: Session 12 logged out. Waiting for processes to exit.
Jan 13 20:33:11.475489 systemd-logind[1456]: Removed session 12.
Jan 13 20:33:15.849228 kubelet[2539]: I0113 20:33:15.849172    2539 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 20:33:15.849622 kubelet[2539]: E0113 20:33:15.849570    2539 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 13 20:33:16.480792 systemd[1]: Started sshd@12-10.0.0.151:22-10.0.0.1:32886.service - OpenSSH per-connection server daemon (10.0.0.1:32886).
Jan 13 20:33:16.521262 sshd[5344]: Accepted publickey for core from 10.0.0.1 port 32886 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:33:16.522727 sshd-session[5344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:33:16.526692 systemd-logind[1456]: New session 13 of user core.
Jan 13 20:33:16.544079 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 13 20:33:16.735325 sshd[5346]: Connection closed by 10.0.0.1 port 32886
Jan 13 20:33:16.735904 sshd-session[5344]: pam_unix(sshd:session): session closed for user core
Jan 13 20:33:16.739289 systemd[1]: sshd@12-10.0.0.151:22-10.0.0.1:32886.service: Deactivated successfully.
Jan 13 20:33:16.741943 systemd[1]: session-13.scope: Deactivated successfully.
Jan 13 20:33:16.742974 systemd-logind[1456]: Session 13 logged out. Waiting for processes to exit.
Jan 13 20:33:16.744297 systemd-logind[1456]: Removed session 13.
Jan 13 20:33:21.751254 systemd[1]: Started sshd@13-10.0.0.151:22-10.0.0.1:32888.service - OpenSSH per-connection server daemon (10.0.0.1:32888).
Jan 13 20:33:21.793737 sshd[5387]: Accepted publickey for core from 10.0.0.1 port 32888 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:33:21.795276 sshd-session[5387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:33:21.798807 systemd-logind[1456]: New session 14 of user core.
Jan 13 20:33:21.808063 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 13 20:33:21.957355 sshd[5389]: Connection closed by 10.0.0.1 port 32888
Jan 13 20:33:21.957883 sshd-session[5387]: pam_unix(sshd:session): session closed for user core
Jan 13 20:33:21.964355 systemd[1]: sshd@13-10.0.0.151:22-10.0.0.1:32888.service: Deactivated successfully.
Jan 13 20:33:21.965780 systemd[1]: session-14.scope: Deactivated successfully.
Jan 13 20:33:21.968109 systemd-logind[1456]: Session 14 logged out. Waiting for processes to exit.
Jan 13 20:33:21.978178 systemd[1]: Started sshd@14-10.0.0.151:22-10.0.0.1:32900.service - OpenSSH per-connection server daemon (10.0.0.1:32900).
Jan 13 20:33:21.978950 systemd-logind[1456]: Removed session 14.
Jan 13 20:33:22.020043 sshd[5401]: Accepted publickey for core from 10.0.0.1 port 32900 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:33:22.021497 sshd-session[5401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:33:22.024986 systemd-logind[1456]: New session 15 of user core.
Jan 13 20:33:22.031067 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 13 20:33:22.251783 sshd[5403]: Connection closed by 10.0.0.1 port 32900
Jan 13 20:33:22.252525 sshd-session[5401]: pam_unix(sshd:session): session closed for user core
Jan 13 20:33:22.260412 systemd[1]: sshd@14-10.0.0.151:22-10.0.0.1:32900.service: Deactivated successfully.
Jan 13 20:33:22.261887 systemd[1]: session-15.scope: Deactivated successfully.
Jan 13 20:33:22.263221 systemd-logind[1456]: Session 15 logged out. Waiting for processes to exit.
Jan 13 20:33:22.272190 systemd[1]: Started sshd@15-10.0.0.151:22-10.0.0.1:32904.service - OpenSSH per-connection server daemon (10.0.0.1:32904).
Jan 13 20:33:22.273220 systemd-logind[1456]: Removed session 15.
Jan 13 20:33:22.311992 sshd[5413]: Accepted publickey for core from 10.0.0.1 port 32904 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:33:22.313248 sshd-session[5413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:33:22.316789 systemd-logind[1456]: New session 16 of user core.
Jan 13 20:33:22.326075 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 13 20:33:23.725794 sshd[5415]: Connection closed by 10.0.0.1 port 32904
Jan 13 20:33:23.726672 sshd-session[5413]: pam_unix(sshd:session): session closed for user core
Jan 13 20:33:23.740500 systemd[1]: sshd@15-10.0.0.151:22-10.0.0.1:32904.service: Deactivated successfully.
Jan 13 20:33:23.745104 systemd[1]: session-16.scope: Deactivated successfully.
Jan 13 20:33:23.747075 systemd-logind[1456]: Session 16 logged out. Waiting for processes to exit.
Jan 13 20:33:23.752841 systemd[1]: Started sshd@16-10.0.0.151:22-10.0.0.1:55406.service - OpenSSH per-connection server daemon (10.0.0.1:55406).
Jan 13 20:33:23.753816 systemd-logind[1456]: Removed session 16.
Jan 13 20:33:23.794203 sshd[5439]: Accepted publickey for core from 10.0.0.1 port 55406 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:33:23.795472 sshd-session[5439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:33:23.799421 systemd-logind[1456]: New session 17 of user core.
Jan 13 20:33:23.803170 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 13 20:33:24.128376 sshd[5441]: Connection closed by 10.0.0.1 port 55406
Jan 13 20:33:24.127534 sshd-session[5439]: pam_unix(sshd:session): session closed for user core
Jan 13 20:33:24.139767 systemd[1]: sshd@16-10.0.0.151:22-10.0.0.1:55406.service: Deactivated successfully.
Jan 13 20:33:24.142368 systemd[1]: session-17.scope: Deactivated successfully.
Jan 13 20:33:24.144161 systemd-logind[1456]: Session 17 logged out. Waiting for processes to exit.
Jan 13 20:33:24.155236 systemd[1]: Started sshd@17-10.0.0.151:22-10.0.0.1:55422.service - OpenSSH per-connection server daemon (10.0.0.1:55422).
Jan 13 20:33:24.156491 systemd-logind[1456]: Removed session 17.
Jan 13 20:33:24.191783 sshd[5452]: Accepted publickey for core from 10.0.0.1 port 55422 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:33:24.192884 sshd-session[5452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:33:24.196862 systemd-logind[1456]: New session 18 of user core.
Jan 13 20:33:24.211105 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 13 20:33:24.359616 sshd[5454]: Connection closed by 10.0.0.1 port 55422
Jan 13 20:33:24.360304 sshd-session[5452]: pam_unix(sshd:session): session closed for user core
Jan 13 20:33:24.363440 systemd[1]: sshd@17-10.0.0.151:22-10.0.0.1:55422.service: Deactivated successfully.
Jan 13 20:33:24.365277 systemd[1]: session-18.scope: Deactivated successfully.
Jan 13 20:33:24.365871 systemd-logind[1456]: Session 18 logged out. Waiting for processes to exit.
Jan 13 20:33:24.366620 systemd-logind[1456]: Removed session 18.
Jan 13 20:33:29.374381 systemd[1]: Started sshd@18-10.0.0.151:22-10.0.0.1:55428.service - OpenSSH per-connection server daemon (10.0.0.1:55428).
Jan 13 20:33:29.414692 sshd[5492]: Accepted publickey for core from 10.0.0.1 port 55428 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:33:29.415775 sshd-session[5492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:33:29.419721 systemd-logind[1456]: New session 19 of user core.
Jan 13 20:33:29.432149 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 13 20:33:29.555017 sshd[5494]: Connection closed by 10.0.0.1 port 55428
Jan 13 20:33:29.555346 sshd-session[5492]: pam_unix(sshd:session): session closed for user core
Jan 13 20:33:29.558706 systemd[1]: sshd@18-10.0.0.151:22-10.0.0.1:55428.service: Deactivated successfully.
Jan 13 20:33:29.560367 systemd[1]: session-19.scope: Deactivated successfully.
Jan 13 20:33:29.560859 systemd-logind[1456]: Session 19 logged out. Waiting for processes to exit.
Jan 13 20:33:29.561619 systemd-logind[1456]: Removed session 19.
Jan 13 20:33:31.763860 containerd[1468]: time="2025-01-13T20:33:31.763824759Z" level=info msg="StopPodSandbox for \"2a8d29fb1359b46e9799e64a9f306066c6114265817d537da33b51b5aca49153\""
Jan 13 20:33:31.764472 containerd[1468]: time="2025-01-13T20:33:31.764383651Z" level=info msg="TearDown network for sandbox \"2a8d29fb1359b46e9799e64a9f306066c6114265817d537da33b51b5aca49153\" successfully"
Jan 13 20:33:31.764472 containerd[1468]: time="2025-01-13T20:33:31.764403932Z" level=info msg="StopPodSandbox for \"2a8d29fb1359b46e9799e64a9f306066c6114265817d537da33b51b5aca49153\" returns successfully"
Jan 13 20:33:31.764819 containerd[1468]: time="2025-01-13T20:33:31.764792380Z" level=info msg="RemovePodSandbox for \"2a8d29fb1359b46e9799e64a9f306066c6114265817d537da33b51b5aca49153\""
Jan 13 20:33:31.764852 containerd[1468]: time="2025-01-13T20:33:31.764823381Z" level=info msg="Forcibly stopping sandbox \"2a8d29fb1359b46e9799e64a9f306066c6114265817d537da33b51b5aca49153\""
Jan 13 20:33:31.764903 containerd[1468]: time="2025-01-13T20:33:31.764889743Z" level=info msg="TearDown network for sandbox \"2a8d29fb1359b46e9799e64a9f306066c6114265817d537da33b51b5aca49153\" successfully"
Jan 13 20:33:31.771953 containerd[1468]: time="2025-01-13T20:33:31.771895539Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2a8d29fb1359b46e9799e64a9f306066c6114265817d537da33b51b5aca49153\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:33:31.772157 containerd[1468]: time="2025-01-13T20:33:31.772054303Z" level=info msg="RemovePodSandbox \"2a8d29fb1359b46e9799e64a9f306066c6114265817d537da33b51b5aca49153\" returns successfully"
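This block and the near-identical ones that follow are kubelet's sandbox garbage collection replayed once per retired sandbox: StopPodSandbox tears down the network, then a forced RemovePodSandbox deletes it; the warning fires because the sandbox metadata is already gone by the time containerd tries to attach a status to the removal event. The CRI calls behind one cycle, sketched with k8s.io/cri-api against the endpoint crictl would use:

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial containerd's CRI endpoint the way crictl/kubelet do.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	ctx := context.Background()
	id := "2a8d29fb1359b46e9799e64a9f306066c6114265817d537da33b51b5aca49153"

	// Stop first (tears down the network namespace), then remove.
	if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: id}); err != nil {
		log.Fatal(err)
	}
	if _, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: id}); err != nil {
		log.Fatal(err)
	}
}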
Jan 13 20:33:31.772515 containerd[1468]: time="2025-01-13T20:33:31.772477353Z" level=info msg="StopPodSandbox for \"9c46d1e1f1119a188ba9bca6d2f18e6cb004fee7d9a26a534332a533b904c01f\""
Jan 13 20:33:31.774999 containerd[1468]: time="2025-01-13T20:33:31.774949168Z" level=info msg="TearDown network for sandbox \"9c46d1e1f1119a188ba9bca6d2f18e6cb004fee7d9a26a534332a533b904c01f\" successfully"
Jan 13 20:33:31.774999 containerd[1468]: time="2025-01-13T20:33:31.774981769Z" level=info msg="StopPodSandbox for \"9c46d1e1f1119a188ba9bca6d2f18e6cb004fee7d9a26a534332a533b904c01f\" returns successfully"
Jan 13 20:33:31.776450 containerd[1468]: time="2025-01-13T20:33:31.775327296Z" level=info msg="RemovePodSandbox for \"9c46d1e1f1119a188ba9bca6d2f18e6cb004fee7d9a26a534332a533b904c01f\""
Jan 13 20:33:31.776450 containerd[1468]: time="2025-01-13T20:33:31.775355137Z" level=info msg="Forcibly stopping sandbox \"9c46d1e1f1119a188ba9bca6d2f18e6cb004fee7d9a26a534332a533b904c01f\""
Jan 13 20:33:31.776450 containerd[1468]: time="2025-01-13T20:33:31.775422458Z" level=info msg="TearDown network for sandbox \"9c46d1e1f1119a188ba9bca6d2f18e6cb004fee7d9a26a534332a533b904c01f\" successfully"
Jan 13 20:33:31.778129 containerd[1468]: time="2025-01-13T20:33:31.778093238Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9c46d1e1f1119a188ba9bca6d2f18e6cb004fee7d9a26a534332a533b904c01f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:33:31.778254 containerd[1468]: time="2025-01-13T20:33:31.778234801Z" level=info msg="RemovePodSandbox \"9c46d1e1f1119a188ba9bca6d2f18e6cb004fee7d9a26a534332a533b904c01f\" returns successfully"
Jan 13 20:33:31.778593 containerd[1468]: time="2025-01-13T20:33:31.778571649Z" level=info msg="StopPodSandbox for \"4f33082086425a6bd03f3c4d11ee2a9a91e37080b1b783cabdc6ac289975a899\""
Jan 13 20:33:31.778672 containerd[1468]: time="2025-01-13T20:33:31.778657491Z" level=info msg="TearDown network for sandbox \"4f33082086425a6bd03f3c4d11ee2a9a91e37080b1b783cabdc6ac289975a899\" successfully"
Jan 13 20:33:31.778707 containerd[1468]: time="2025-01-13T20:33:31.778670771Z" level=info msg="StopPodSandbox for \"4f33082086425a6bd03f3c4d11ee2a9a91e37080b1b783cabdc6ac289975a899\" returns successfully"
Jan 13 20:33:31.778915 containerd[1468]: time="2025-01-13T20:33:31.778893096Z" level=info msg="RemovePodSandbox for \"4f33082086425a6bd03f3c4d11ee2a9a91e37080b1b783cabdc6ac289975a899\""
Jan 13 20:33:31.778963 containerd[1468]: time="2025-01-13T20:33:31.778943777Z" level=info msg="Forcibly stopping sandbox \"4f33082086425a6bd03f3c4d11ee2a9a91e37080b1b783cabdc6ac289975a899\""
Jan 13 20:33:31.779024 containerd[1468]: time="2025-01-13T20:33:31.779006659Z" level=info msg="TearDown network for sandbox \"4f33082086425a6bd03f3c4d11ee2a9a91e37080b1b783cabdc6ac289975a899\" successfully"
Jan 13 20:33:31.781653 containerd[1468]: time="2025-01-13T20:33:31.781626557Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4f33082086425a6bd03f3c4d11ee2a9a91e37080b1b783cabdc6ac289975a899\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:33:31.781713 containerd[1468]: time="2025-01-13T20:33:31.781690319Z" level=info msg="RemovePodSandbox \"4f33082086425a6bd03f3c4d11ee2a9a91e37080b1b783cabdc6ac289975a899\" returns successfully"
Jan 13 20:33:31.782197 containerd[1468]: time="2025-01-13T20:33:31.782045647Z" level=info msg="StopPodSandbox for \"6826b72c36be55efdf7544c31cb92c7bd8d69582dc54319a6565066208e6800d\""
Jan 13 20:33:31.782197 containerd[1468]: time="2025-01-13T20:33:31.782131929Z" level=info msg="TearDown network for sandbox \"6826b72c36be55efdf7544c31cb92c7bd8d69582dc54319a6565066208e6800d\" successfully"
Jan 13 20:33:31.782197 containerd[1468]: time="2025-01-13T20:33:31.782142089Z" level=info msg="StopPodSandbox for \"6826b72c36be55efdf7544c31cb92c7bd8d69582dc54319a6565066208e6800d\" returns successfully"
Jan 13 20:33:31.782955 containerd[1468]: time="2025-01-13T20:33:31.782538138Z" level=info msg="RemovePodSandbox for \"6826b72c36be55efdf7544c31cb92c7bd8d69582dc54319a6565066208e6800d\""
Jan 13 20:33:31.782955 containerd[1468]: time="2025-01-13T20:33:31.782560498Z" level=info msg="Forcibly stopping sandbox \"6826b72c36be55efdf7544c31cb92c7bd8d69582dc54319a6565066208e6800d\""
Jan 13 20:33:31.782955 containerd[1468]: time="2025-01-13T20:33:31.782627340Z" level=info msg="TearDown network for sandbox \"6826b72c36be55efdf7544c31cb92c7bd8d69582dc54319a6565066208e6800d\" successfully"
Jan 13 20:33:31.785139 containerd[1468]: time="2025-01-13T20:33:31.785112035Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6826b72c36be55efdf7544c31cb92c7bd8d69582dc54319a6565066208e6800d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:33:31.785199 containerd[1468]: time="2025-01-13T20:33:31.785165597Z" level=info msg="RemovePodSandbox \"6826b72c36be55efdf7544c31cb92c7bd8d69582dc54319a6565066208e6800d\" returns successfully"
Jan 13 20:33:31.785734 containerd[1468]: time="2025-01-13T20:33:31.785467163Z" level=info msg="StopPodSandbox for \"fb1a52e546c74327b879443ffdc1b6e9a34008fea5cd189b80d312803f2bce7e\""
Jan 13 20:33:31.785734 containerd[1468]: time="2025-01-13T20:33:31.785544045Z" level=info msg="TearDown network for sandbox \"fb1a52e546c74327b879443ffdc1b6e9a34008fea5cd189b80d312803f2bce7e\" successfully"
Jan 13 20:33:31.785734 containerd[1468]: time="2025-01-13T20:33:31.785552965Z" level=info msg="StopPodSandbox for \"fb1a52e546c74327b879443ffdc1b6e9a34008fea5cd189b80d312803f2bce7e\" returns successfully"
Jan 13 20:33:31.785832 containerd[1468]: time="2025-01-13T20:33:31.785796651Z" level=info msg="RemovePodSandbox for \"fb1a52e546c74327b879443ffdc1b6e9a34008fea5cd189b80d312803f2bce7e\""
Jan 13 20:33:31.785832 containerd[1468]: time="2025-01-13T20:33:31.785820091Z" level=info msg="Forcibly stopping sandbox \"fb1a52e546c74327b879443ffdc1b6e9a34008fea5cd189b80d312803f2bce7e\""
Jan 13 20:33:31.785962 containerd[1468]: time="2025-01-13T20:33:31.785881453Z" level=info msg="TearDown network for sandbox \"fb1a52e546c74327b879443ffdc1b6e9a34008fea5cd189b80d312803f2bce7e\" successfully"
Jan 13 20:33:31.788166 containerd[1468]: time="2025-01-13T20:33:31.788128103Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fb1a52e546c74327b879443ffdc1b6e9a34008fea5cd189b80d312803f2bce7e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:33:31.788354 containerd[1468]: time="2025-01-13T20:33:31.788174544Z" level=info msg="RemovePodSandbox \"fb1a52e546c74327b879443ffdc1b6e9a34008fea5cd189b80d312803f2bce7e\" returns successfully"
Jan 13 20:33:31.788557 containerd[1468]: time="2025-01-13T20:33:31.788532712Z" level=info msg="StopPodSandbox for \"6977718389673ffa34f94051255432a5ee2d659cfc51e6ec585e2155c236362a\""
Jan 13 20:33:31.788797 containerd[1468]: time="2025-01-13T20:33:31.788707916Z" level=info msg="TearDown network for sandbox \"6977718389673ffa34f94051255432a5ee2d659cfc51e6ec585e2155c236362a\" successfully"
Jan 13 20:33:31.788797 containerd[1468]: time="2025-01-13T20:33:31.788724276Z" level=info msg="StopPodSandbox for \"6977718389673ffa34f94051255432a5ee2d659cfc51e6ec585e2155c236362a\" returns successfully"
Jan 13 20:33:31.789015 containerd[1468]: time="2025-01-13T20:33:31.788984482Z" level=info msg="RemovePodSandbox for \"6977718389673ffa34f94051255432a5ee2d659cfc51e6ec585e2155c236362a\""
Jan 13 20:33:31.789015 containerd[1468]: time="2025-01-13T20:33:31.789008443Z" level=info msg="Forcibly stopping sandbox \"6977718389673ffa34f94051255432a5ee2d659cfc51e6ec585e2155c236362a\""
Jan 13 20:33:31.789091 containerd[1468]: time="2025-01-13T20:33:31.789072964Z" level=info msg="TearDown network for sandbox \"6977718389673ffa34f94051255432a5ee2d659cfc51e6ec585e2155c236362a\" successfully"
Jan 13 20:33:31.792166 containerd[1468]: time="2025-01-13T20:33:31.792011470Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6977718389673ffa34f94051255432a5ee2d659cfc51e6ec585e2155c236362a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:33:31.792166 containerd[1468]: time="2025-01-13T20:33:31.792106152Z" level=info msg="RemovePodSandbox \"6977718389673ffa34f94051255432a5ee2d659cfc51e6ec585e2155c236362a\" returns successfully"
Jan 13 20:33:31.793421 containerd[1468]: time="2025-01-13T20:33:31.793350020Z" level=info msg="StopPodSandbox for \"c60e04b971ad8b6ab716024fbbd0951648e672e34c310ac5f4b66861370c860b\""
Jan 13 20:33:31.793590 containerd[1468]: time="2025-01-13T20:33:31.793553545Z" level=info msg="TearDown network for sandbox \"c60e04b971ad8b6ab716024fbbd0951648e672e34c310ac5f4b66861370c860b\" successfully"
Jan 13 20:33:31.793669 containerd[1468]: time="2025-01-13T20:33:31.793633626Z" level=info msg="StopPodSandbox for \"c60e04b971ad8b6ab716024fbbd0951648e672e34c310ac5f4b66861370c860b\" returns successfully"
Jan 13 20:33:31.794169 containerd[1468]: time="2025-01-13T20:33:31.794145798Z" level=info msg="RemovePodSandbox for \"c60e04b971ad8b6ab716024fbbd0951648e672e34c310ac5f4b66861370c860b\""
Jan 13 20:33:31.794242 containerd[1468]: time="2025-01-13T20:33:31.794180359Z" level=info msg="Forcibly stopping sandbox \"c60e04b971ad8b6ab716024fbbd0951648e672e34c310ac5f4b66861370c860b\""
Jan 13 20:33:31.794306 containerd[1468]: time="2025-01-13T20:33:31.794288081Z" level=info msg="TearDown network for sandbox \"c60e04b971ad8b6ab716024fbbd0951648e672e34c310ac5f4b66861370c860b\" successfully"
Jan 13 20:33:31.797086 containerd[1468]: time="2025-01-13T20:33:31.797054463Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c60e04b971ad8b6ab716024fbbd0951648e672e34c310ac5f4b66861370c860b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:33:31.797158 containerd[1468]: time="2025-01-13T20:33:31.797112544Z" level=info msg="RemovePodSandbox \"c60e04b971ad8b6ab716024fbbd0951648e672e34c310ac5f4b66861370c860b\" returns successfully"
Jan 13 20:33:31.797574 containerd[1468]: time="2025-01-13T20:33:31.797428431Z" level=info msg="StopPodSandbox for \"204c61f511ac516da43c7fdbd2cd291490c17a76b4724e0d62f7a1b3d5c62ee3\""
Jan 13 20:33:31.803946 containerd[1468]: time="2025-01-13T20:33:31.803862455Z" level=info msg="TearDown network for sandbox \"204c61f511ac516da43c7fdbd2cd291490c17a76b4724e0d62f7a1b3d5c62ee3\" successfully"
Jan 13 20:33:31.803946 containerd[1468]: time="2025-01-13T20:33:31.803891896Z" level=info msg="StopPodSandbox for \"204c61f511ac516da43c7fdbd2cd291490c17a76b4724e0d62f7a1b3d5c62ee3\" returns successfully"
Jan 13 20:33:31.805007 containerd[1468]: time="2025-01-13T20:33:31.804217423Z" level=info msg="RemovePodSandbox for \"204c61f511ac516da43c7fdbd2cd291490c17a76b4724e0d62f7a1b3d5c62ee3\""
Jan 13 20:33:31.805007 containerd[1468]: time="2025-01-13T20:33:31.804244384Z" level=info msg="Forcibly stopping sandbox \"204c61f511ac516da43c7fdbd2cd291490c17a76b4724e0d62f7a1b3d5c62ee3\""
Jan 13 20:33:31.805007 containerd[1468]: time="2025-01-13T20:33:31.804304385Z" level=info msg="TearDown network for sandbox \"204c61f511ac516da43c7fdbd2cd291490c17a76b4724e0d62f7a1b3d5c62ee3\" successfully"
Jan 13 20:33:31.806952 containerd[1468]: time="2025-01-13T20:33:31.806911844Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"204c61f511ac516da43c7fdbd2cd291490c17a76b4724e0d62f7a1b3d5c62ee3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:33:31.807012 containerd[1468]: time="2025-01-13T20:33:31.806971725Z" level=info msg="RemovePodSandbox \"204c61f511ac516da43c7fdbd2cd291490c17a76b4724e0d62f7a1b3d5c62ee3\" returns successfully"
Jan 13 20:33:31.807443 containerd[1468]: time="2025-01-13T20:33:31.807307093Z" level=info msg="StopPodSandbox for \"227014b4c404495432de99cad5b4d7041c29a5b1fa0a53c7fdad08f5eb812dc2\""
Jan 13 20:33:31.807443 containerd[1468]: time="2025-01-13T20:33:31.807384454Z" level=info msg="TearDown network for sandbox \"227014b4c404495432de99cad5b4d7041c29a5b1fa0a53c7fdad08f5eb812dc2\" successfully"
Jan 13 20:33:31.807443 containerd[1468]: time="2025-01-13T20:33:31.807393614Z" level=info msg="StopPodSandbox for \"227014b4c404495432de99cad5b4d7041c29a5b1fa0a53c7fdad08f5eb812dc2\" returns successfully"
Jan 13 20:33:31.808243 containerd[1468]: time="2025-01-13T20:33:31.807683141Z" level=info msg="RemovePodSandbox for \"227014b4c404495432de99cad5b4d7041c29a5b1fa0a53c7fdad08f5eb812dc2\""
Jan 13 20:33:31.808243 containerd[1468]: time="2025-01-13T20:33:31.807705221Z" level=info msg="Forcibly stopping sandbox \"227014b4c404495432de99cad5b4d7041c29a5b1fa0a53c7fdad08f5eb812dc2\""
Jan 13 20:33:31.808243 containerd[1468]: time="2025-01-13T20:33:31.807776823Z" level=info msg="TearDown network for sandbox \"227014b4c404495432de99cad5b4d7041c29a5b1fa0a53c7fdad08f5eb812dc2\" successfully"
Jan 13 20:33:31.810049 containerd[1468]: time="2025-01-13T20:33:31.810014313Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"227014b4c404495432de99cad5b4d7041c29a5b1fa0a53c7fdad08f5eb812dc2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:33:31.810093 containerd[1468]: time="2025-01-13T20:33:31.810060274Z" level=info msg="RemovePodSandbox \"227014b4c404495432de99cad5b4d7041c29a5b1fa0a53c7fdad08f5eb812dc2\" returns successfully"
Jan 13 20:33:31.810497 containerd[1468]: time="2025-01-13T20:33:31.810345641Z" level=info msg="StopPodSandbox for \"b74bef2fae98d998e15814ed273657c2cbec573cd123ffc705d8c82518fdab91\""
Jan 13 20:33:31.810497 containerd[1468]: time="2025-01-13T20:33:31.810423842Z" level=info msg="TearDown network for sandbox \"b74bef2fae98d998e15814ed273657c2cbec573cd123ffc705d8c82518fdab91\" successfully"
Jan 13 20:33:31.810497 containerd[1468]: time="2025-01-13T20:33:31.810433283Z" level=info msg="StopPodSandbox for \"b74bef2fae98d998e15814ed273657c2cbec573cd123ffc705d8c82518fdab91\" returns successfully"
Jan 13 20:33:31.810760 containerd[1468]: time="2025-01-13T20:33:31.810738409Z" level=info msg="RemovePodSandbox for \"b74bef2fae98d998e15814ed273657c2cbec573cd123ffc705d8c82518fdab91\""
Jan 13 20:33:31.810792 containerd[1468]: time="2025-01-13T20:33:31.810774890Z" level=info msg="Forcibly stopping sandbox \"b74bef2fae98d998e15814ed273657c2cbec573cd123ffc705d8c82518fdab91\""
Jan 13 20:33:31.810849 containerd[1468]: time="2025-01-13T20:33:31.810834852Z" level=info msg="TearDown network for sandbox \"b74bef2fae98d998e15814ed273657c2cbec573cd123ffc705d8c82518fdab91\" successfully"
Jan 13 20:33:31.813395 containerd[1468]: time="2025-01-13T20:33:31.813362068Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b74bef2fae98d998e15814ed273657c2cbec573cd123ffc705d8c82518fdab91\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:33:31.813468 containerd[1468]: time="2025-01-13T20:33:31.813434270Z" level=info msg="RemovePodSandbox \"b74bef2fae98d998e15814ed273657c2cbec573cd123ffc705d8c82518fdab91\" returns successfully"
Jan 13 20:33:31.813801 containerd[1468]: time="2025-01-13T20:33:31.813659355Z" level=info msg="StopPodSandbox for \"b996ea66c3d596bd6c32c7a8855b207d5452ab0171ba32622b60b63f735bcdd0\""
Jan 13 20:33:31.813801 containerd[1468]: time="2025-01-13T20:33:31.813740637Z" level=info msg="TearDown network for sandbox \"b996ea66c3d596bd6c32c7a8855b207d5452ab0171ba32622b60b63f735bcdd0\" successfully"
Jan 13 20:33:31.813801 containerd[1468]: time="2025-01-13T20:33:31.813749397Z" level=info msg="StopPodSandbox for \"b996ea66c3d596bd6c32c7a8855b207d5452ab0171ba32622b60b63f735bcdd0\" returns successfully"
Jan 13 20:33:31.814037 containerd[1468]: time="2025-01-13T20:33:31.813989282Z" level=info msg="RemovePodSandbox for \"b996ea66c3d596bd6c32c7a8855b207d5452ab0171ba32622b60b63f735bcdd0\""
Jan 13 20:33:31.814037 containerd[1468]: time="2025-01-13T20:33:31.814015003Z" level=info msg="Forcibly stopping sandbox \"b996ea66c3d596bd6c32c7a8855b207d5452ab0171ba32622b60b63f735bcdd0\""
Jan 13 20:33:31.814152 containerd[1468]: time="2025-01-13T20:33:31.814074444Z" level=info msg="TearDown network for sandbox \"b996ea66c3d596bd6c32c7a8855b207d5452ab0171ba32622b60b63f735bcdd0\" successfully"
Jan 13 20:33:31.816358 containerd[1468]: time="2025-01-13T20:33:31.816329895Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b996ea66c3d596bd6c32c7a8855b207d5452ab0171ba32622b60b63f735bcdd0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:33:31.816410 containerd[1468]: time="2025-01-13T20:33:31.816377576Z" level=info msg="RemovePodSandbox \"b996ea66c3d596bd6c32c7a8855b207d5452ab0171ba32622b60b63f735bcdd0\" returns successfully"
Jan 13 20:33:31.816680 containerd[1468]: time="2025-01-13T20:33:31.816662142Z" level=info msg="StopPodSandbox for \"4f60655957aa9a75401ad573647a08c741a7feef9989c6b75fd2228aef3c9806\""
Jan 13 20:33:31.816752 containerd[1468]: time="2025-01-13T20:33:31.816732824Z" level=info msg="TearDown network for sandbox \"4f60655957aa9a75401ad573647a08c741a7feef9989c6b75fd2228aef3c9806\" successfully"
Jan 13 20:33:31.816752 containerd[1468]: time="2025-01-13T20:33:31.816745024Z" level=info msg="StopPodSandbox for \"4f60655957aa9a75401ad573647a08c741a7feef9989c6b75fd2228aef3c9806\" returns successfully"
Jan 13 20:33:31.817558 containerd[1468]: time="2025-01-13T20:33:31.816992109Z" level=info msg="RemovePodSandbox for \"4f60655957aa9a75401ad573647a08c741a7feef9989c6b75fd2228aef3c9806\""
Jan 13 20:33:31.817558 containerd[1468]: time="2025-01-13T20:33:31.817015070Z" level=info msg="Forcibly stopping sandbox \"4f60655957aa9a75401ad573647a08c741a7feef9989c6b75fd2228aef3c9806\""
Jan 13 20:33:31.817558 containerd[1468]: time="2025-01-13T20:33:31.817084792Z" level=info msg="TearDown network for sandbox \"4f60655957aa9a75401ad573647a08c741a7feef9989c6b75fd2228aef3c9806\" successfully"
Jan 13 20:33:31.819565 containerd[1468]: time="2025-01-13T20:33:31.819524966Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4f60655957aa9a75401ad573647a08c741a7feef9989c6b75fd2228aef3c9806\": an error occurred when trying to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:33:31.819608 containerd[1468]: time="2025-01-13T20:33:31.819573607Z" level=info msg="RemovePodSandbox \"4f60655957aa9a75401ad573647a08c741a7feef9989c6b75fd2228aef3c9806\" returns successfully"
Jan 13 20:33:31.819902 containerd[1468]: time="2025-01-13T20:33:31.819851013Z" level=info msg="StopPodSandbox for \"2171bb1e1dd6639cc619c888993f6c37ba501d56796a45fb4a35b31eef705b9c\""
Jan 13 20:33:31.819958 containerd[1468]: time="2025-01-13T20:33:31.819941215Z" level=info msg="TearDown network for sandbox \"2171bb1e1dd6639cc619c888993f6c37ba501d56796a45fb4a35b31eef705b9c\" successfully"
Jan 13 20:33:31.819958 containerd[1468]: time="2025-01-13T20:33:31.819951136Z" level=info msg="StopPodSandbox for \"2171bb1e1dd6639cc619c888993f6c37ba501d56796a45fb4a35b31eef705b9c\" returns successfully"
Jan 13 20:33:31.820237 containerd[1468]: time="2025-01-13T20:33:31.820206021Z" level=info msg="RemovePodSandbox for \"2171bb1e1dd6639cc619c888993f6c37ba501d56796a45fb4a35b31eef705b9c\""
Jan 13 20:33:31.820270 containerd[1468]: time="2025-01-13T20:33:31.820240142Z" level=info msg="Forcibly stopping sandbox \"2171bb1e1dd6639cc619c888993f6c37ba501d56796a45fb4a35b31eef705b9c\""
Jan 13 20:33:31.820319 containerd[1468]: time="2025-01-13T20:33:31.820303064Z" level=info msg="TearDown network for sandbox \"2171bb1e1dd6639cc619c888993f6c37ba501d56796a45fb4a35b31eef705b9c\" successfully"
Jan 13 20:33:31.822590 containerd[1468]: time="2025-01-13T20:33:31.822553954Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2171bb1e1dd6639cc619c888993f6c37ba501d56796a45fb4a35b31eef705b9c\": an error occurred when trying to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:33:31.822637 containerd[1468]: time="2025-01-13T20:33:31.822604675Z" level=info msg="RemovePodSandbox \"2171bb1e1dd6639cc619c888993f6c37ba501d56796a45fb4a35b31eef705b9c\" returns successfully"
Jan 13 20:33:31.822897 containerd[1468]: time="2025-01-13T20:33:31.822878721Z" level=info msg="StopPodSandbox for \"690bc1b2626d272c4e59557880da2cfe9cd245155628eec4d3173c850c242820\""
Jan 13 20:33:31.822986 containerd[1468]: time="2025-01-13T20:33:31.822971043Z" level=info msg="TearDown network for sandbox \"690bc1b2626d272c4e59557880da2cfe9cd245155628eec4d3173c850c242820\" successfully"
Jan 13 20:33:31.823019 containerd[1468]: time="2025-01-13T20:33:31.822984884Z" level=info msg="StopPodSandbox for \"690bc1b2626d272c4e59557880da2cfe9cd245155628eec4d3173c850c242820\" returns successfully"
Jan 13 20:33:31.823261 containerd[1468]: time="2025-01-13T20:33:31.823233329Z" level=info msg="RemovePodSandbox for \"690bc1b2626d272c4e59557880da2cfe9cd245155628eec4d3173c850c242820\""
Jan 13 20:33:31.823261 containerd[1468]: time="2025-01-13T20:33:31.823257770Z" level=info msg="Forcibly stopping sandbox \"690bc1b2626d272c4e59557880da2cfe9cd245155628eec4d3173c850c242820\""
Jan 13 20:33:31.823323 containerd[1468]: time="2025-01-13T20:33:31.823312091Z" level=info msg="TearDown network for sandbox \"690bc1b2626d272c4e59557880da2cfe9cd245155628eec4d3173c850c242820\" successfully"
Jan 13 20:33:31.829290 containerd[1468]: time="2025-01-13T20:33:31.829249904Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"690bc1b2626d272c4e59557880da2cfe9cd245155628eec4d3173c850c242820\": an error occurred when trying to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:33:31.829361 containerd[1468]: time="2025-01-13T20:33:31.829296345Z" level=info msg="RemovePodSandbox \"690bc1b2626d272c4e59557880da2cfe9cd245155628eec4d3173c850c242820\" returns successfully"
Jan 13 20:33:31.829619 containerd[1468]: time="2025-01-13T20:33:31.829590192Z" level=info msg="StopPodSandbox for \"1e045b20982eb4edb8bcbc62d8bb5f2e18c80d6964a710161da81c59cd642336\""
Jan 13 20:33:31.829692 containerd[1468]: time="2025-01-13T20:33:31.829673793Z" level=info msg="TearDown network for sandbox \"1e045b20982eb4edb8bcbc62d8bb5f2e18c80d6964a710161da81c59cd642336\" successfully"
Jan 13 20:33:31.829692 containerd[1468]: time="2025-01-13T20:33:31.829687914Z" level=info msg="StopPodSandbox for \"1e045b20982eb4edb8bcbc62d8bb5f2e18c80d6964a710161da81c59cd642336\" returns successfully"
Jan 13 20:33:31.829956 containerd[1468]: time="2025-01-13T20:33:31.829937279Z" level=info msg="RemovePodSandbox for \"1e045b20982eb4edb8bcbc62d8bb5f2e18c80d6964a710161da81c59cd642336\""
Jan 13 20:33:31.830002 containerd[1468]: time="2025-01-13T20:33:31.829958400Z" level=info msg="Forcibly stopping sandbox \"1e045b20982eb4edb8bcbc62d8bb5f2e18c80d6964a710161da81c59cd642336\""
Jan 13 20:33:31.830037 containerd[1468]: time="2025-01-13T20:33:31.830004361Z" level=info msg="TearDown network for sandbox \"1e045b20982eb4edb8bcbc62d8bb5f2e18c80d6964a710161da81c59cd642336\" successfully"
Jan 13 20:33:31.832389 containerd[1468]: time="2025-01-13T20:33:31.832363254Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1e045b20982eb4edb8bcbc62d8bb5f2e18c80d6964a710161da81c59cd642336\": an error occurred when trying to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:33:31.832452 containerd[1468]: time="2025-01-13T20:33:31.832409215Z" level=info msg="RemovePodSandbox \"1e045b20982eb4edb8bcbc62d8bb5f2e18c80d6964a710161da81c59cd642336\" returns successfully"
Jan 13 20:33:31.832764 containerd[1468]: time="2025-01-13T20:33:31.832730342Z" level=info msg="StopPodSandbox for \"0be7f04b7eb038af0328fab487455d648eaaec8e84c76b996cdb64c772e475f6\""
Jan 13 20:33:31.832846 containerd[1468]: time="2025-01-13T20:33:31.832818664Z" level=info msg="TearDown network for sandbox \"0be7f04b7eb038af0328fab487455d648eaaec8e84c76b996cdb64c772e475f6\" successfully"
Jan 13 20:33:31.832846 containerd[1468]: time="2025-01-13T20:33:31.832833824Z" level=info msg="StopPodSandbox for \"0be7f04b7eb038af0328fab487455d648eaaec8e84c76b996cdb64c772e475f6\" returns successfully"
Jan 13 20:33:31.833143 containerd[1468]: time="2025-01-13T20:33:31.833121591Z" level=info msg="RemovePodSandbox for \"0be7f04b7eb038af0328fab487455d648eaaec8e84c76b996cdb64c772e475f6\""
Jan 13 20:33:31.833197 containerd[1468]: time="2025-01-13T20:33:31.833146831Z" level=info msg="Forcibly stopping sandbox \"0be7f04b7eb038af0328fab487455d648eaaec8e84c76b996cdb64c772e475f6\""
Jan 13 20:33:31.833228 containerd[1468]: time="2025-01-13T20:33:31.833204633Z" level=info msg="TearDown network for sandbox \"0be7f04b7eb038af0328fab487455d648eaaec8e84c76b996cdb64c772e475f6\" successfully"
Jan 13 20:33:31.835823 containerd[1468]: time="2025-01-13T20:33:31.835788730Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0be7f04b7eb038af0328fab487455d648eaaec8e84c76b996cdb64c772e475f6\": an error occurred when trying to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:33:31.835887 containerd[1468]: time="2025-01-13T20:33:31.835864172Z" level=info msg="RemovePodSandbox \"0be7f04b7eb038af0328fab487455d648eaaec8e84c76b996cdb64c772e475f6\" returns successfully"
Jan 13 20:33:31.836192 containerd[1468]: time="2025-01-13T20:33:31.836161099Z" level=info msg="StopPodSandbox for \"d8d428451f0a685837713c470e2664b25b6bc2aa86d8ee5c05620efa28ca3f3c\""
Jan 13 20:33:31.836263 containerd[1468]: time="2025-01-13T20:33:31.836247181Z" level=info msg="TearDown network for sandbox \"d8d428451f0a685837713c470e2664b25b6bc2aa86d8ee5c05620efa28ca3f3c\" successfully"
Jan 13 20:33:31.836263 containerd[1468]: time="2025-01-13T20:33:31.836260581Z" level=info msg="StopPodSandbox for \"d8d428451f0a685837713c470e2664b25b6bc2aa86d8ee5c05620efa28ca3f3c\" returns successfully"
Jan 13 20:33:31.836551 containerd[1468]: time="2025-01-13T20:33:31.836530987Z" level=info msg="RemovePodSandbox for \"d8d428451f0a685837713c470e2664b25b6bc2aa86d8ee5c05620efa28ca3f3c\""
Jan 13 20:33:31.836593 containerd[1468]: time="2025-01-13T20:33:31.836555348Z" level=info msg="Forcibly stopping sandbox \"d8d428451f0a685837713c470e2664b25b6bc2aa86d8ee5c05620efa28ca3f3c\""
Jan 13 20:33:31.836620 containerd[1468]: time="2025-01-13T20:33:31.836606509Z" level=info msg="TearDown network for sandbox \"d8d428451f0a685837713c470e2664b25b6bc2aa86d8ee5c05620efa28ca3f3c\" successfully"
Jan 13 20:33:31.838807 containerd[1468]: time="2025-01-13T20:33:31.838771157Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d8d428451f0a685837713c470e2664b25b6bc2aa86d8ee5c05620efa28ca3f3c\": an error occurred when trying to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:33:31.838867 containerd[1468]: time="2025-01-13T20:33:31.838823998Z" level=info msg="RemovePodSandbox \"d8d428451f0a685837713c470e2664b25b6bc2aa86d8ee5c05620efa28ca3f3c\" returns successfully"
Jan 13 20:33:31.839276 containerd[1468]: time="2025-01-13T20:33:31.839236288Z" level=info msg="StopPodSandbox for \"2ba4acd24d6eba8273dca26b97b48b62cc98f439908871df9ee12981c7a74f8f\""
Jan 13 20:33:31.839330 containerd[1468]: time="2025-01-13T20:33:31.839312449Z" level=info msg="TearDown network for sandbox \"2ba4acd24d6eba8273dca26b97b48b62cc98f439908871df9ee12981c7a74f8f\" successfully"
Jan 13 20:33:31.839330 containerd[1468]: time="2025-01-13T20:33:31.839323170Z" level=info msg="StopPodSandbox for \"2ba4acd24d6eba8273dca26b97b48b62cc98f439908871df9ee12981c7a74f8f\" returns successfully"
Jan 13 20:33:31.839563 containerd[1468]: time="2025-01-13T20:33:31.839524574Z" level=info msg="RemovePodSandbox for \"2ba4acd24d6eba8273dca26b97b48b62cc98f439908871df9ee12981c7a74f8f\""
Jan 13 20:33:31.839563 containerd[1468]: time="2025-01-13T20:33:31.839558735Z" level=info msg="Forcibly stopping sandbox \"2ba4acd24d6eba8273dca26b97b48b62cc98f439908871df9ee12981c7a74f8f\""
Jan 13 20:33:31.839630 containerd[1468]: time="2025-01-13T20:33:31.839616856Z" level=info msg="TearDown network for sandbox \"2ba4acd24d6eba8273dca26b97b48b62cc98f439908871df9ee12981c7a74f8f\" successfully"
Jan 13 20:33:31.841917 containerd[1468]: time="2025-01-13T20:33:31.841865466Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2ba4acd24d6eba8273dca26b97b48b62cc98f439908871df9ee12981c7a74f8f\": an error occurred when trying to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:33:31.841917 containerd[1468]: time="2025-01-13T20:33:31.841917108Z" level=info msg="RemovePodSandbox \"2ba4acd24d6eba8273dca26b97b48b62cc98f439908871df9ee12981c7a74f8f\" returns successfully"
Jan 13 20:33:31.842229 containerd[1468]: time="2025-01-13T20:33:31.842200234Z" level=info msg="StopPodSandbox for \"98bf47fa53140be7d7fc7b0a4bfde2dd1ee28edfc19f866eafc53b6a5c680326\""
Jan 13 20:33:31.842297 containerd[1468]: time="2025-01-13T20:33:31.842280516Z" level=info msg="TearDown network for sandbox \"98bf47fa53140be7d7fc7b0a4bfde2dd1ee28edfc19f866eafc53b6a5c680326\" successfully"
Jan 13 20:33:31.842297 containerd[1468]: time="2025-01-13T20:33:31.842294836Z" level=info msg="StopPodSandbox for \"98bf47fa53140be7d7fc7b0a4bfde2dd1ee28edfc19f866eafc53b6a5c680326\" returns successfully"
Jan 13 20:33:31.842620 containerd[1468]: time="2025-01-13T20:33:31.842570442Z" level=info msg="RemovePodSandbox for \"98bf47fa53140be7d7fc7b0a4bfde2dd1ee28edfc19f866eafc53b6a5c680326\""
Jan 13 20:33:31.842620 containerd[1468]: time="2025-01-13T20:33:31.842598763Z" level=info msg="Forcibly stopping sandbox \"98bf47fa53140be7d7fc7b0a4bfde2dd1ee28edfc19f866eafc53b6a5c680326\""
Jan 13 20:33:31.842685 containerd[1468]: time="2025-01-13T20:33:31.842659404Z" level=info msg="TearDown network for sandbox \"98bf47fa53140be7d7fc7b0a4bfde2dd1ee28edfc19f866eafc53b6a5c680326\" successfully"
Jan 13 20:33:31.845197 containerd[1468]: time="2025-01-13T20:33:31.845160500Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"98bf47fa53140be7d7fc7b0a4bfde2dd1ee28edfc19f866eafc53b6a5c680326\": an error occurred when trying to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:33:31.845240 containerd[1468]: time="2025-01-13T20:33:31.845209101Z" level=info msg="RemovePodSandbox \"98bf47fa53140be7d7fc7b0a4bfde2dd1ee28edfc19f866eafc53b6a5c680326\" returns successfully"
Jan 13 20:33:31.845522 containerd[1468]: time="2025-01-13T20:33:31.845481227Z" level=info msg="StopPodSandbox for \"d14f87f138a958f619d2bd17054987a24270b539da165388919c0107fe6c996e\""
Jan 13 20:33:31.845568 containerd[1468]: time="2025-01-13T20:33:31.845559589Z" level=info msg="TearDown network for sandbox \"d14f87f138a958f619d2bd17054987a24270b539da165388919c0107fe6c996e\" successfully"
Jan 13 20:33:31.845603 containerd[1468]: time="2025-01-13T20:33:31.845568509Z" level=info msg="StopPodSandbox for \"d14f87f138a958f619d2bd17054987a24270b539da165388919c0107fe6c996e\" returns successfully"
Jan 13 20:33:31.845870 containerd[1468]: time="2025-01-13T20:33:31.845850916Z" level=info msg="RemovePodSandbox for \"d14f87f138a958f619d2bd17054987a24270b539da165388919c0107fe6c996e\""
Jan 13 20:33:31.845913 containerd[1468]: time="2025-01-13T20:33:31.845872796Z" level=info msg="Forcibly stopping sandbox \"d14f87f138a958f619d2bd17054987a24270b539da165388919c0107fe6c996e\""
Jan 13 20:33:31.845973 containerd[1468]: time="2025-01-13T20:33:31.845957158Z" level=info msg="TearDown network for sandbox \"d14f87f138a958f619d2bd17054987a24270b539da165388919c0107fe6c996e\" successfully"
Jan 13 20:33:31.848386 containerd[1468]: time="2025-01-13T20:33:31.848351092Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d14f87f138a958f619d2bd17054987a24270b539da165388919c0107fe6c996e\": an error occurred when trying to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:33:31.848420 containerd[1468]: time="2025-01-13T20:33:31.848397693Z" level=info msg="RemovePodSandbox \"d14f87f138a958f619d2bd17054987a24270b539da165388919c0107fe6c996e\" returns successfully"
Jan 13 20:33:31.848689 containerd[1468]: time="2025-01-13T20:33:31.848664899Z" level=info msg="StopPodSandbox for \"af8f3d97f148ecdc6de1f17773435b7a3768a0af4d1c424f916403d78a577bb9\""
Jan 13 20:33:31.848759 containerd[1468]: time="2025-01-13T20:33:31.848743421Z" level=info msg="TearDown network for sandbox \"af8f3d97f148ecdc6de1f17773435b7a3768a0af4d1c424f916403d78a577bb9\" successfully"
Jan 13 20:33:31.848759 containerd[1468]: time="2025-01-13T20:33:31.848757261Z" level=info msg="StopPodSandbox for \"af8f3d97f148ecdc6de1f17773435b7a3768a0af4d1c424f916403d78a577bb9\" returns successfully"
Jan 13 20:33:31.849050 containerd[1468]: time="2025-01-13T20:33:31.849012427Z" level=info msg="RemovePodSandbox for \"af8f3d97f148ecdc6de1f17773435b7a3768a0af4d1c424f916403d78a577bb9\""
Jan 13 20:33:31.849050 containerd[1468]: time="2025-01-13T20:33:31.849039627Z" level=info msg="Forcibly stopping sandbox \"af8f3d97f148ecdc6de1f17773435b7a3768a0af4d1c424f916403d78a577bb9\""
Jan 13 20:33:31.849111 containerd[1468]: time="2025-01-13T20:33:31.849100469Z" level=info msg="TearDown network for sandbox \"af8f3d97f148ecdc6de1f17773435b7a3768a0af4d1c424f916403d78a577bb9\" successfully"
Jan 13 20:33:31.851298 containerd[1468]: time="2025-01-13T20:33:31.851263957Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"af8f3d97f148ecdc6de1f17773435b7a3768a0af4d1c424f916403d78a577bb9\": an error occurred when trying to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:33:31.851419 containerd[1468]: time="2025-01-13T20:33:31.851312998Z" level=info msg="RemovePodSandbox \"af8f3d97f148ecdc6de1f17773435b7a3768a0af4d1c424f916403d78a577bb9\" returns successfully"
Jan 13 20:33:31.851635 containerd[1468]: time="2025-01-13T20:33:31.851592964Z" level=info msg="StopPodSandbox for \"c32a74c6158719d3d3dd47a43aa691fc67ffd8fadea51c76e1cabe46239ee209\""
Jan 13 20:33:31.851691 containerd[1468]: time="2025-01-13T20:33:31.851673886Z" level=info msg="TearDown network for sandbox \"c32a74c6158719d3d3dd47a43aa691fc67ffd8fadea51c76e1cabe46239ee209\" successfully"
Jan 13 20:33:31.851691 containerd[1468]: time="2025-01-13T20:33:31.851687286Z" level=info msg="StopPodSandbox for \"c32a74c6158719d3d3dd47a43aa691fc67ffd8fadea51c76e1cabe46239ee209\" returns successfully"
Jan 13 20:33:31.852948 containerd[1468]: time="2025-01-13T20:33:31.852028454Z" level=info msg="RemovePodSandbox for \"c32a74c6158719d3d3dd47a43aa691fc67ffd8fadea51c76e1cabe46239ee209\""
Jan 13 20:33:31.852948 containerd[1468]: time="2025-01-13T20:33:31.852055375Z" level=info msg="Forcibly stopping sandbox \"c32a74c6158719d3d3dd47a43aa691fc67ffd8fadea51c76e1cabe46239ee209\""
Jan 13 20:33:31.852948 containerd[1468]: time="2025-01-13T20:33:31.852114136Z" level=info msg="TearDown network for sandbox \"c32a74c6158719d3d3dd47a43aa691fc67ffd8fadea51c76e1cabe46239ee209\" successfully"
Jan 13 20:33:31.854389 containerd[1468]: time="2025-01-13T20:33:31.854349946Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c32a74c6158719d3d3dd47a43aa691fc67ffd8fadea51c76e1cabe46239ee209\": an error occurred when trying to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:33:31.854430 containerd[1468]: time="2025-01-13T20:33:31.854395987Z" level=info msg="RemovePodSandbox \"c32a74c6158719d3d3dd47a43aa691fc67ffd8fadea51c76e1cabe46239ee209\" returns successfully"
Jan 13 20:33:31.854700 containerd[1468]: time="2025-01-13T20:33:31.854677793Z" level=info msg="StopPodSandbox for \"86074dac164d545a6bdfaaede8e9762f2bdc237875019f4f8b3c5b856b84c244\""
Jan 13 20:33:31.855065 containerd[1468]: time="2025-01-13T20:33:31.854911959Z" level=info msg="TearDown network for sandbox \"86074dac164d545a6bdfaaede8e9762f2bdc237875019f4f8b3c5b856b84c244\" successfully"
Jan 13 20:33:31.855065 containerd[1468]: time="2025-01-13T20:33:31.854943559Z" level=info msg="StopPodSandbox for \"86074dac164d545a6bdfaaede8e9762f2bdc237875019f4f8b3c5b856b84c244\" returns successfully"
Jan 13 20:33:31.855241 containerd[1468]: time="2025-01-13T20:33:31.855220086Z" level=info msg="RemovePodSandbox for \"86074dac164d545a6bdfaaede8e9762f2bdc237875019f4f8b3c5b856b84c244\""
Jan 13 20:33:31.855269 containerd[1468]: time="2025-01-13T20:33:31.855244886Z" level=info msg="Forcibly stopping sandbox \"86074dac164d545a6bdfaaede8e9762f2bdc237875019f4f8b3c5b856b84c244\""
Jan 13 20:33:31.855350 containerd[1468]: time="2025-01-13T20:33:31.855300367Z" level=info msg="TearDown network for sandbox \"86074dac164d545a6bdfaaede8e9762f2bdc237875019f4f8b3c5b856b84c244\" successfully"
Jan 13 20:33:31.857350 containerd[1468]: time="2025-01-13T20:33:31.857322733Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"86074dac164d545a6bdfaaede8e9762f2bdc237875019f4f8b3c5b856b84c244\": an error occurred when trying to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:33:31.857408 containerd[1468]: time="2025-01-13T20:33:31.857368574Z" level=info msg="RemovePodSandbox \"86074dac164d545a6bdfaaede8e9762f2bdc237875019f4f8b3c5b856b84c244\" returns successfully"
Jan 13 20:33:31.857659 containerd[1468]: time="2025-01-13T20:33:31.857637420Z" level=info msg="StopPodSandbox for \"0465478482940ca2e857f1225b7d983c4d14f34d9d233cb9aa6cbdcd6981dedd\""
Jan 13 20:33:31.857894 containerd[1468]: time="2025-01-13T20:33:31.857819984Z" level=info msg="TearDown network for sandbox \"0465478482940ca2e857f1225b7d983c4d14f34d9d233cb9aa6cbdcd6981dedd\" successfully"
Jan 13 20:33:31.857894 containerd[1468]: time="2025-01-13T20:33:31.857836024Z" level=info msg="StopPodSandbox for \"0465478482940ca2e857f1225b7d983c4d14f34d9d233cb9aa6cbdcd6981dedd\" returns successfully"
Jan 13 20:33:31.858134 containerd[1468]: time="2025-01-13T20:33:31.858093270Z" level=info msg="RemovePodSandbox for \"0465478482940ca2e857f1225b7d983c4d14f34d9d233cb9aa6cbdcd6981dedd\""
Jan 13 20:33:31.858134 containerd[1468]: time="2025-01-13T20:33:31.858128351Z" level=info msg="Forcibly stopping sandbox \"0465478482940ca2e857f1225b7d983c4d14f34d9d233cb9aa6cbdcd6981dedd\""
Jan 13 20:33:31.858204 containerd[1468]: time="2025-01-13T20:33:31.858190272Z" level=info msg="TearDown network for sandbox \"0465478482940ca2e857f1225b7d983c4d14f34d9d233cb9aa6cbdcd6981dedd\" successfully"
Jan 13 20:33:31.860560 containerd[1468]: time="2025-01-13T20:33:31.860517964Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0465478482940ca2e857f1225b7d983c4d14f34d9d233cb9aa6cbdcd6981dedd\": an error occurred when trying to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 13 20:33:31.860622 containerd[1468]: time="2025-01-13T20:33:31.860564485Z" level=info msg="RemovePodSandbox \"0465478482940ca2e857f1225b7d983c4d14f34d9d233cb9aa6cbdcd6981dedd\" returns successfully"
Jan 13 20:33:34.565445 systemd[1]: Started sshd@19-10.0.0.151:22-10.0.0.1:56092.service - OpenSSH per-connection server daemon (10.0.0.1:56092).
Jan 13 20:33:34.604953 sshd[5508]: Accepted publickey for core from 10.0.0.1 port 56092 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:33:34.606024 sshd-session[5508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:33:34.609712 systemd-logind[1456]: New session 20 of user core.
Jan 13 20:33:34.616075 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 13 20:33:34.724938 sshd[5510]: Connection closed by 10.0.0.1 port 56092
Jan 13 20:33:34.725252 sshd-session[5508]: pam_unix(sshd:session): session closed for user core
Jan 13 20:33:34.728467 systemd[1]: sshd@19-10.0.0.151:22-10.0.0.1:56092.service: Deactivated successfully.
Jan 13 20:33:34.730221 systemd[1]: session-20.scope: Deactivated successfully.
Jan 13 20:33:34.730831 systemd-logind[1456]: Session 20 logged out. Waiting for processes to exit.
Jan 13 20:33:34.731548 systemd-logind[1456]: Removed session 20.
Jan 13 20:33:37.531362 kubelet[2539]: I0113 20:33:37.531272    2539 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 13 20:33:39.736607 systemd[1]: Started sshd@20-10.0.0.151:22-10.0.0.1:56106.service - OpenSSH per-connection server daemon (10.0.0.1:56106).
Jan 13 20:33:39.775978 sshd[5536]: Accepted publickey for core from 10.0.0.1 port 56106 ssh2: RSA SHA256:iH1z/OIMgfi4N9JZYqLIdSBLDStp/YciUtgOKDXSKOo
Jan 13 20:33:39.777032 sshd-session[5536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 13 20:33:39.780502 systemd-logind[1456]: New session 21 of user core.
Jan 13 20:33:39.787116 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 13 20:33:39.902101 sshd[5538]: Connection closed by 10.0.0.1 port 56106
Jan 13 20:33:39.902629 sshd-session[5536]: pam_unix(sshd:session): session closed for user core
Jan 13 20:33:39.905803 systemd[1]: sshd@20-10.0.0.151:22-10.0.0.1:56106.service: Deactivated successfully.
Jan 13 20:33:39.908277 systemd[1]: session-21.scope: Deactivated successfully.
Jan 13 20:33:39.909141 systemd-logind[1456]: Session 21 logged out. Waiting for processes to exit.
Jan 13 20:33:39.910240 systemd-logind[1456]: Removed session 21.
Jan 13 20:33:40.005176 kubelet[2539]: I0113 20:33:40.004979    2539 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"