Jan 29 12:08:13.925126 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 29 12:08:13.925148 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Wed Jan 29 10:12:48 -00 2025
Jan 29 12:08:13.925158 kernel: KASLR enabled
Jan 29 12:08:13.925164 kernel: efi: EFI v2.7 by EDK II
Jan 29 12:08:13.925170 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 
Jan 29 12:08:13.925176 kernel: random: crng init done
Jan 29 12:08:13.925183 kernel: ACPI: Early table checksum verification disabled
Jan 29 12:08:13.925189 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jan 29 12:08:13.925195 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS  BXPC     00000001      01000013)
Jan 29 12:08:13.925202 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 29 12:08:13.925208 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 29 12:08:13.925214 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 29 12:08:13.925221 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 29 12:08:13.925227 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 29 12:08:13.925234 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 29 12:08:13.925242 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 29 12:08:13.925249 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 29 12:08:13.925255 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 29 12:08:13.925262 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jan 29 12:08:13.925280 kernel: NUMA: Failed to initialise from firmware
Jan 29 12:08:13.925287 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 12:08:13.925293 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jan 29 12:08:13.925300 kernel: Zone ranges:
Jan 29 12:08:13.925307 kernel:   DMA      [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 12:08:13.925314 kernel:   DMA32    empty
Jan 29 12:08:13.925322 kernel:   Normal   empty
Jan 29 12:08:13.925328 kernel: Movable zone start for each node
Jan 29 12:08:13.925334 kernel: Early memory node ranges
Jan 29 12:08:13.925341 kernel:   node   0: [mem 0x0000000040000000-0x00000000d976ffff]
Jan 29 12:08:13.925347 kernel:   node   0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jan 29 12:08:13.925353 kernel:   node   0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jan 29 12:08:13.925360 kernel:   node   0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jan 29 12:08:13.925366 kernel:   node   0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jan 29 12:08:13.925373 kernel:   node   0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jan 29 12:08:13.925379 kernel:   node   0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jan 29 12:08:13.925386 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jan 29 12:08:13.925392 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jan 29 12:08:13.925400 kernel: psci: probing for conduit method from ACPI.
Jan 29 12:08:13.925406 kernel: psci: PSCIv1.1 detected in firmware.
Jan 29 12:08:13.925413 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 29 12:08:13.925422 kernel: psci: Trusted OS migration not required
Jan 29 12:08:13.925429 kernel: psci: SMC Calling Convention v1.1
Jan 29 12:08:13.925436 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 29 12:08:13.925444 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 29 12:08:13.925451 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 29 12:08:13.925459 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 
Jan 29 12:08:13.925465 kernel: Detected PIPT I-cache on CPU0
Jan 29 12:08:13.925472 kernel: CPU features: detected: GIC system register CPU interface
Jan 29 12:08:13.925479 kernel: CPU features: detected: Hardware dirty bit management
Jan 29 12:08:13.925485 kernel: CPU features: detected: Spectre-v4
Jan 29 12:08:13.925492 kernel: CPU features: detected: Spectre-BHB
Jan 29 12:08:13.925499 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 29 12:08:13.925506 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 29 12:08:13.925513 kernel: CPU features: detected: ARM erratum 1418040
Jan 29 12:08:13.925520 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 29 12:08:13.925527 kernel: alternatives: applying boot alternatives
Jan 29 12:08:13.925535 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 29 12:08:13.925542 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 29 12:08:13.925549 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 29 12:08:13.925556 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 29 12:08:13.925562 kernel: Fallback order for Node 0: 0 
Jan 29 12:08:13.925569 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 633024
Jan 29 12:08:13.925575 kernel: Policy zone: DMA
Jan 29 12:08:13.925582 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 29 12:08:13.925602 kernel: software IO TLB: area num 4.
Jan 29 12:08:13.925610 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jan 29 12:08:13.925617 kernel: Memory: 2386532K/2572288K available (10240K kernel code, 2186K rwdata, 8096K rodata, 39360K init, 897K bss, 185756K reserved, 0K cma-reserved)
Jan 29 12:08:13.925624 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 29 12:08:13.925631 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 29 12:08:13.925638 kernel: rcu:         RCU event tracing is enabled.
Jan 29 12:08:13.925645 kernel: rcu:         RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 29 12:08:13.925651 kernel:         Trampoline variant of Tasks RCU enabled.
Jan 29 12:08:13.925660 kernel:         Tracing variant of Tasks RCU enabled.
Jan 29 12:08:13.925670 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 29 12:08:13.925678 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 29 12:08:13.925685 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 29 12:08:13.925695 kernel: GICv3: 256 SPIs implemented
Jan 29 12:08:13.925702 kernel: GICv3: 0 Extended SPIs implemented
Jan 29 12:08:13.925713 kernel: Root IRQ handler: gic_handle_irq
Jan 29 12:08:13.925720 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 29 12:08:13.925726 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 29 12:08:13.925733 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 29 12:08:13.925740 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jan 29 12:08:13.925747 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jan 29 12:08:13.925754 kernel: GICv3: using LPI property table @0x00000000400f0000
Jan 29 12:08:13.925767 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jan 29 12:08:13.925775 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 29 12:08:13.925784 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 12:08:13.925791 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 29 12:08:13.925798 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 29 12:08:13.925805 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 29 12:08:13.925812 kernel: arm-pv: using stolen time PV
Jan 29 12:08:13.925819 kernel: Console: colour dummy device 80x25
Jan 29 12:08:13.925826 kernel: ACPI: Core revision 20230628
Jan 29 12:08:13.925833 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 29 12:08:13.925840 kernel: pid_max: default: 32768 minimum: 301
Jan 29 12:08:13.925847 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 29 12:08:13.925855 kernel: landlock: Up and running.
Jan 29 12:08:13.925862 kernel: SELinux:  Initializing.
Jan 29 12:08:13.925869 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 12:08:13.925876 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 29 12:08:13.925883 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 12:08:13.925890 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 29 12:08:13.925897 kernel: rcu: Hierarchical SRCU implementation.
Jan 29 12:08:13.925904 kernel: rcu:         Max phase no-delay instances is 400.
Jan 29 12:08:13.925911 kernel: Platform MSI: ITS@0x8080000 domain created
Jan 29 12:08:13.925919 kernel: PCI/MSI: ITS@0x8080000 domain created
Jan 29 12:08:13.925926 kernel: Remapping and enabling EFI services.
Jan 29 12:08:13.925933 kernel: smp: Bringing up secondary CPUs ...
Jan 29 12:08:13.925940 kernel: Detected PIPT I-cache on CPU1
Jan 29 12:08:13.925946 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 29 12:08:13.925954 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jan 29 12:08:13.925960 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 12:08:13.925967 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 29 12:08:13.925974 kernel: Detected PIPT I-cache on CPU2
Jan 29 12:08:13.925981 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 29 12:08:13.925990 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jan 29 12:08:13.925997 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 12:08:13.926009 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 29 12:08:13.926018 kernel: Detected PIPT I-cache on CPU3
Jan 29 12:08:13.926026 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 29 12:08:13.926033 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jan 29 12:08:13.926040 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 29 12:08:13.926047 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 29 12:08:13.926055 kernel: smp: Brought up 1 node, 4 CPUs
Jan 29 12:08:13.926063 kernel: SMP: Total of 4 processors activated.
Jan 29 12:08:13.926071 kernel: CPU features: detected: 32-bit EL0 Support
Jan 29 12:08:13.926078 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 29 12:08:13.926085 kernel: CPU features: detected: Common not Private translations
Jan 29 12:08:13.926093 kernel: CPU features: detected: CRC32 instructions
Jan 29 12:08:13.926100 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 29 12:08:13.926107 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 29 12:08:13.926114 kernel: CPU features: detected: LSE atomic instructions
Jan 29 12:08:13.926123 kernel: CPU features: detected: Privileged Access Never
Jan 29 12:08:13.926130 kernel: CPU features: detected: RAS Extension Support
Jan 29 12:08:13.926137 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 29 12:08:13.926144 kernel: CPU: All CPU(s) started at EL1
Jan 29 12:08:13.926152 kernel: alternatives: applying system-wide alternatives
Jan 29 12:08:13.926164 kernel: devtmpfs: initialized
Jan 29 12:08:13.926172 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 29 12:08:13.926179 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 29 12:08:13.926186 kernel: pinctrl core: initialized pinctrl subsystem
Jan 29 12:08:13.926195 kernel: SMBIOS 3.0.0 present.
Jan 29 12:08:13.926203 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jan 29 12:08:13.926210 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 29 12:08:13.926218 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 29 12:08:13.926225 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 29 12:08:13.926232 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 29 12:08:13.926239 kernel: audit: initializing netlink subsys (disabled)
Jan 29 12:08:13.926247 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Jan 29 12:08:13.926254 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 29 12:08:13.926262 kernel: cpuidle: using governor menu
Jan 29 12:08:13.926270 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 29 12:08:13.926277 kernel: ASID allocator initialised with 32768 entries
Jan 29 12:08:13.926284 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 29 12:08:13.926291 kernel: Serial: AMBA PL011 UART driver
Jan 29 12:08:13.926299 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 29 12:08:13.926306 kernel: Modules: 0 pages in range for non-PLT usage
Jan 29 12:08:13.926313 kernel: Modules: 509040 pages in range for PLT usage
Jan 29 12:08:13.926320 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 29 12:08:13.926329 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 29 12:08:13.926336 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 29 12:08:13.926344 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 29 12:08:13.926351 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 29 12:08:13.926358 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 29 12:08:13.926366 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 29 12:08:13.926373 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 29 12:08:13.926380 kernel: ACPI: Added _OSI(Module Device)
Jan 29 12:08:13.926387 kernel: ACPI: Added _OSI(Processor Device)
Jan 29 12:08:13.926396 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 29 12:08:13.926403 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 29 12:08:13.926411 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 29 12:08:13.926418 kernel: ACPI: Interpreter enabled
Jan 29 12:08:13.926426 kernel: ACPI: Using GIC for interrupt routing
Jan 29 12:08:13.926433 kernel: ACPI: MCFG table detected, 1 entries
Jan 29 12:08:13.926440 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 29 12:08:13.926447 kernel: printk: console [ttyAMA0] enabled
Jan 29 12:08:13.926458 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 29 12:08:13.926642 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 29 12:08:13.926721 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 29 12:08:13.926834 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 29 12:08:13.926900 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 29 12:08:13.926964 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 29 12:08:13.926974 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io  0x0000-0xffff window]
Jan 29 12:08:13.926981 kernel: PCI host bridge to bus 0000:00
Jan 29 12:08:13.927063 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 29 12:08:13.927140 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0xffff window]
Jan 29 12:08:13.927201 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 29 12:08:13.927259 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 29 12:08:13.927338 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jan 29 12:08:13.927414 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jan 29 12:08:13.927487 kernel: pci 0000:00:01.0: reg 0x10: [io  0x0000-0x001f]
Jan 29 12:08:13.927555 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jan 29 12:08:13.927665 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 12:08:13.927734 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 29 12:08:13.927808 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jan 29 12:08:13.927890 kernel: pci 0000:00:01.0: BAR 0: assigned [io  0x1000-0x101f]
Jan 29 12:08:13.927954 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jan 29 12:08:13.928018 kernel: pci_bus 0000:00: resource 5 [io  0x0000-0xffff window]
Jan 29 12:08:13.928078 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jan 29 12:08:13.928088 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 29 12:08:13.928095 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 29 12:08:13.928103 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 29 12:08:13.928110 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 29 12:08:13.928117 kernel: iommu: Default domain type: Translated
Jan 29 12:08:13.928124 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 29 12:08:13.928134 kernel: efivars: Registered efivars operations
Jan 29 12:08:13.928141 kernel: vgaarb: loaded
Jan 29 12:08:13.928149 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 29 12:08:13.928156 kernel: VFS: Disk quotas dquot_6.6.0
Jan 29 12:08:13.928195 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 29 12:08:13.928203 kernel: pnp: PnP ACPI init
Jan 29 12:08:13.928290 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jan 29 12:08:13.928301 kernel: pnp: PnP ACPI: found 1 devices
Jan 29 12:08:13.928309 kernel: NET: Registered PF_INET protocol family
Jan 29 12:08:13.928320 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 29 12:08:13.928328 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 29 12:08:13.928335 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 29 12:08:13.928342 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 29 12:08:13.928350 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 29 12:08:13.928357 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 29 12:08:13.928365 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 12:08:13.928372 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 29 12:08:13.928381 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 29 12:08:13.928388 kernel: PCI: CLS 0 bytes, default 64
Jan 29 12:08:13.928395 kernel: kvm [1]: HYP mode not available
Jan 29 12:08:13.928403 kernel: Initialise system trusted keyrings
Jan 29 12:08:13.928410 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 29 12:08:13.928417 kernel: Key type asymmetric registered
Jan 29 12:08:13.928425 kernel: Asymmetric key parser 'x509' registered
Jan 29 12:08:13.928432 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 29 12:08:13.928439 kernel: io scheduler mq-deadline registered
Jan 29 12:08:13.928446 kernel: io scheduler kyber registered
Jan 29 12:08:13.928455 kernel: io scheduler bfq registered
Jan 29 12:08:13.928463 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 29 12:08:13.928470 kernel: ACPI: button: Power Button [PWRB]
Jan 29 12:08:13.928478 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 29 12:08:13.928547 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jan 29 12:08:13.928557 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 29 12:08:13.928565 kernel: thunder_xcv, ver 1.0
Jan 29 12:08:13.928572 kernel: thunder_bgx, ver 1.0
Jan 29 12:08:13.928579 kernel: nicpf, ver 1.0
Jan 29 12:08:13.928600 kernel: nicvf, ver 1.0
Jan 29 12:08:13.928679 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 29 12:08:13.928743 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-29T12:08:13 UTC (1738152493)
Jan 29 12:08:13.928757 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 29 12:08:13.928774 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jan 29 12:08:13.928782 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 29 12:08:13.928789 kernel: watchdog: Hard watchdog permanently disabled
Jan 29 12:08:13.928796 kernel: NET: Registered PF_INET6 protocol family
Jan 29 12:08:13.928808 kernel: Segment Routing with IPv6
Jan 29 12:08:13.928816 kernel: In-situ OAM (IOAM) with IPv6
Jan 29 12:08:13.928823 kernel: NET: Registered PF_PACKET protocol family
Jan 29 12:08:13.928830 kernel: Key type dns_resolver registered
Jan 29 12:08:13.928837 kernel: registered taskstats version 1
Jan 29 12:08:13.928845 kernel: Loading compiled-in X.509 certificates
Jan 29 12:08:13.928852 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: f200c60883a4a38d496d9250faf693faee9d7415'
Jan 29 12:08:13.928859 kernel: Key type .fscrypt registered
Jan 29 12:08:13.928866 kernel: Key type fscrypt-provisioning registered
Jan 29 12:08:13.928876 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 29 12:08:13.928884 kernel: ima: Allocated hash algorithm: sha1
Jan 29 12:08:13.928891 kernel: ima: No architecture policies found
Jan 29 12:08:13.928899 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 29 12:08:13.928906 kernel: clk: Disabling unused clocks
Jan 29 12:08:13.928913 kernel: Freeing unused kernel memory: 39360K
Jan 29 12:08:13.928920 kernel: Run /init as init process
Jan 29 12:08:13.928927 kernel:   with arguments:
Jan 29 12:08:13.928934 kernel:     /init
Jan 29 12:08:13.928943 kernel:   with environment:
Jan 29 12:08:13.928950 kernel:     HOME=/
Jan 29 12:08:13.928957 kernel:     TERM=linux
Jan 29 12:08:13.928964 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 29 12:08:13.928974 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 12:08:13.928983 systemd[1]: Detected virtualization kvm.
Jan 29 12:08:13.928991 systemd[1]: Detected architecture arm64.
Jan 29 12:08:13.929000 systemd[1]: Running in initrd.
Jan 29 12:08:13.929008 systemd[1]: No hostname configured, using default hostname.
Jan 29 12:08:13.929015 systemd[1]: Hostname set to <localhost>.
Jan 29 12:08:13.929023 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 12:08:13.929031 systemd[1]: Queued start job for default target initrd.target.
Jan 29 12:08:13.929038 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 12:08:13.929046 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 12:08:13.929055 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 29 12:08:13.929064 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 12:08:13.929072 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 29 12:08:13.929080 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 29 12:08:13.929089 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 29 12:08:13.929098 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 29 12:08:13.929105 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 12:08:13.929113 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 12:08:13.929123 systemd[1]: Reached target paths.target - Path Units.
Jan 29 12:08:13.929130 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 12:08:13.929138 systemd[1]: Reached target swap.target - Swaps.
Jan 29 12:08:13.929146 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 12:08:13.929154 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 12:08:13.929161 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 12:08:13.929169 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 29 12:08:13.929177 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 29 12:08:13.929185 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 12:08:13.929195 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 12:08:13.929202 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 12:08:13.929210 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 12:08:13.929218 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 29 12:08:13.929226 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 12:08:13.929234 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 29 12:08:13.929242 systemd[1]: Starting systemd-fsck-usr.service...
Jan 29 12:08:13.929250 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 12:08:13.929259 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 12:08:13.929267 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:08:13.929275 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 29 12:08:13.929283 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 12:08:13.929291 systemd[1]: Finished systemd-fsck-usr.service.
Jan 29 12:08:13.929319 systemd-journald[239]: Collecting audit messages is disabled.
Jan 29 12:08:13.929347 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 12:08:13.929356 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:08:13.929364 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 29 12:08:13.929373 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 12:08:13.929381 kernel: Bridge firewalling registered
Jan 29 12:08:13.929389 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 12:08:13.929397 systemd-journald[239]: Journal started
Jan 29 12:08:13.929416 systemd-journald[239]: Runtime Journal (/run/log/journal/960a774ce52443fa93664bdbf89cd58b) is 5.9M, max 47.3M, 41.4M free.
Jan 29 12:08:13.914606 systemd-modules-load[240]: Inserted module 'overlay'
Jan 29 12:08:13.931652 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 12:08:13.927571 systemd-modules-load[240]: Inserted module 'br_netfilter'
Jan 29 12:08:13.932715 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 12:08:13.935497 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 12:08:13.937547 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 12:08:13.939753 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 12:08:13.949542 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 12:08:13.952886 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 12:08:13.954822 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 12:08:13.955997 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 12:08:13.967776 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 29 12:08:13.969819 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 12:08:13.982801 dracut-cmdline[275]: dracut-dracut-053
Jan 29 12:08:13.985348 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=05d22c8845dec898f2b35f78b7d946edccf803dd23b974a9db2c3070ca1d8f8c
Jan 29 12:08:14.004028 systemd-resolved[277]: Positive Trust Anchors:
Jan 29 12:08:14.004045 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 12:08:14.004077 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 12:08:14.008957 systemd-resolved[277]: Defaulting to hostname 'linux'.
Jan 29 12:08:14.010023 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 12:08:14.012616 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 12:08:14.057616 kernel: SCSI subsystem initialized
Jan 29 12:08:14.062603 kernel: Loading iSCSI transport class v2.0-870.
Jan 29 12:08:14.069606 kernel: iscsi: registered transport (tcp)
Jan 29 12:08:14.082904 kernel: iscsi: registered transport (qla4xxx)
Jan 29 12:08:14.082965 kernel: QLogic iSCSI HBA Driver
Jan 29 12:08:14.129364 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 29 12:08:14.144831 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 29 12:08:14.160283 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 29 12:08:14.160336 kernel: device-mapper: uevent: version 1.0.3
Jan 29 12:08:14.160356 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 29 12:08:14.207613 kernel: raid6: neonx8   gen() 15757 MB/s
Jan 29 12:08:14.224607 kernel: raid6: neonx4   gen() 15665 MB/s
Jan 29 12:08:14.241605 kernel: raid6: neonx2   gen() 13242 MB/s
Jan 29 12:08:14.258600 kernel: raid6: neonx1   gen() 10488 MB/s
Jan 29 12:08:14.275603 kernel: raid6: int64x8  gen()  6953 MB/s
Jan 29 12:08:14.292601 kernel: raid6: int64x4  gen()  7349 MB/s
Jan 29 12:08:14.309601 kernel: raid6: int64x2  gen()  6131 MB/s
Jan 29 12:08:14.326603 kernel: raid6: int64x1  gen()  5053 MB/s
Jan 29 12:08:14.326622 kernel: raid6: using algorithm neonx8 gen() 15757 MB/s
Jan 29 12:08:14.343609 kernel: raid6: .... xor() 11926 MB/s, rmw enabled
Jan 29 12:08:14.343630 kernel: raid6: using neon recovery algorithm
Jan 29 12:08:14.348631 kernel: xor: measuring software checksum speed
Jan 29 12:08:14.348647 kernel:    8regs           : 19812 MB/sec
Jan 29 12:08:14.349650 kernel:    32regs          : 19322 MB/sec
Jan 29 12:08:14.349662 kernel:    arm64_neon      : 27034 MB/sec
Jan 29 12:08:14.349671 kernel: xor: using function: arm64_neon (27034 MB/sec)
Jan 29 12:08:14.401615 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 29 12:08:14.414212 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 12:08:14.426804 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 12:08:14.438117 systemd-udevd[459]: Using default interface naming scheme 'v255'.
Jan 29 12:08:14.441404 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 12:08:14.444078 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 29 12:08:14.460923 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation
Jan 29 12:08:14.490296 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 12:08:14.505838 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 12:08:14.547031 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 12:08:14.560826 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 29 12:08:14.571627 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 29 12:08:14.574094 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 12:08:14.576086 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 12:08:14.577909 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 12:08:14.584808 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 29 12:08:14.594655 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 12:08:14.604187 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 12:08:14.604310 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 12:08:14.608541 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jan 29 12:08:14.615497 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jan 29 12:08:14.615630 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 29 12:08:14.615650 kernel: GPT:9289727 != 19775487
Jan 29 12:08:14.615662 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 29 12:08:14.615672 kernel: GPT:9289727 != 19775487
Jan 29 12:08:14.615681 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 29 12:08:14.615693 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 12:08:14.608692 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 12:08:14.609517 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 12:08:14.609681 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:08:14.612552 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:08:14.622843 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:08:14.634620 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (524)
Jan 29 12:08:14.634680 kernel: BTRFS: device fsid f02ec3fd-6702-4c1a-b68e-9001713a3a08 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (519)
Jan 29 12:08:14.636303 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 29 12:08:14.643734 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:08:14.650636 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 29 12:08:14.654642 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 29 12:08:14.655620 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jan 29 12:08:14.661012 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 12:08:14.672756 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 29 12:08:14.674513 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 29 12:08:14.681805 disk-uuid[552]: Primary Header is updated.
Jan 29 12:08:14.681805 disk-uuid[552]: Secondary Entries is updated.
Jan 29 12:08:14.681805 disk-uuid[552]: Secondary Header is updated.
Jan 29 12:08:14.685610 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 12:08:14.709085 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 12:08:15.719640 kernel:  vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 29 12:08:15.720405 disk-uuid[553]: The operation has completed successfully.
Jan 29 12:08:15.746882 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 29 12:08:15.746977 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 29 12:08:15.771750 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 29 12:08:15.777082 sh[574]: Success
Jan 29 12:08:15.807617 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 29 12:08:15.860135 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 29 12:08:15.863050 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 29 12:08:15.865145 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 29 12:08:15.875364 kernel: BTRFS info (device dm-0): first mount of filesystem f02ec3fd-6702-4c1a-b68e-9001713a3a08
Jan 29 12:08:15.875402 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 29 12:08:15.875413 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 29 12:08:15.875423 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 29 12:08:15.875942 kernel: BTRFS info (device dm-0): using free space tree
Jan 29 12:08:15.880710 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 29 12:08:15.881546 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 29 12:08:15.895799 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 29 12:08:15.897130 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 29 12:08:15.908270 kernel: BTRFS info (device vda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 29 12:08:15.908306 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 12:08:15.908875 kernel: BTRFS info (device vda6): using free space tree
Jan 29 12:08:15.912211 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 12:08:15.918937 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 29 12:08:15.921298 kernel: BTRFS info (device vda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 29 12:08:15.929624 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 29 12:08:15.939789 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 29 12:08:16.014301 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 12:08:16.024831 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 12:08:16.056836 systemd-networkd[759]: lo: Link UP
Jan 29 12:08:16.056847 systemd-networkd[759]: lo: Gained carrier
Jan 29 12:08:16.057494 systemd-networkd[759]: Enumeration completed
Jan 29 12:08:16.057942 systemd-networkd[759]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 12:08:16.057945 systemd-networkd[759]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 12:08:16.058437 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 12:08:16.061988 systemd-networkd[759]: eth0: Link UP
Jan 29 12:08:16.061991 systemd-networkd[759]: eth0: Gained carrier
Jan 29 12:08:16.061998 systemd-networkd[759]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 12:08:16.064367 systemd[1]: Reached target network.target - Network.
Jan 29 12:08:16.076254 ignition[666]: Ignition 2.19.0
Jan 29 12:08:16.076263 ignition[666]: Stage: fetch-offline
Jan 29 12:08:16.076294 ignition[666]: no configs at "/usr/lib/ignition/base.d"
Jan 29 12:08:16.076302 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 12:08:16.078636 systemd-networkd[759]: eth0: DHCPv4 address 10.0.0.106/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 12:08:16.076450 ignition[666]: parsed url from cmdline: ""
Jan 29 12:08:16.076453 ignition[666]: no config URL provided
Jan 29 12:08:16.076457 ignition[666]: reading system config file "/usr/lib/ignition/user.ign"
Jan 29 12:08:16.076464 ignition[666]: no config at "/usr/lib/ignition/user.ign"
Jan 29 12:08:16.076487 ignition[666]: op(1): [started]  loading QEMU firmware config module
Jan 29 12:08:16.076493 ignition[666]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jan 29 12:08:16.085544 ignition[666]: op(1): [finished] loading QEMU firmware config module
Jan 29 12:08:16.091275 ignition[666]: parsing config with SHA512: bf7e3f1f9ffb0d282508c5964e7cdcc3fc16988bb64a4178bca20e35e31262f03f1a31fb38ba33372fccf245818e77baafeccfe3abb6c69d51ab23d3de6d5796
Jan 29 12:08:16.094349 unknown[666]: fetched base config from "system"
Jan 29 12:08:16.094359 unknown[666]: fetched user config from "qemu"
Jan 29 12:08:16.094757 ignition[666]: fetch-offline: fetch-offline passed
Jan 29 12:08:16.094830 ignition[666]: Ignition finished successfully
Jan 29 12:08:16.096922 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 12:08:16.100114 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jan 29 12:08:16.108843 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 29 12:08:16.118919 ignition[771]: Ignition 2.19.0
Jan 29 12:08:16.118933 ignition[771]: Stage: kargs
Jan 29 12:08:16.119094 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Jan 29 12:08:16.119104 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 12:08:16.119800 ignition[771]: kargs: kargs passed
Jan 29 12:08:16.122539 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 29 12:08:16.119847 ignition[771]: Ignition finished successfully
Jan 29 12:08:16.124772 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 29 12:08:16.138870 ignition[778]: Ignition 2.19.0
Jan 29 12:08:16.138879 ignition[778]: Stage: disks
Jan 29 12:08:16.139043 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Jan 29 12:08:16.139052 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 12:08:16.139719 ignition[778]: disks: disks passed
Jan 29 12:08:16.139767 ignition[778]: Ignition finished successfully
Jan 29 12:08:16.142486 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 29 12:08:16.144138 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 29 12:08:16.145717 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 29 12:08:16.146568 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 12:08:16.148117 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 12:08:16.149454 systemd[1]: Reached target basic.target - Basic System.
Jan 29 12:08:16.156754 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 29 12:08:16.166113 systemd-fsck[787]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 29 12:08:16.169979 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 29 12:08:16.179678 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 29 12:08:16.220606 kernel: EXT4-fs (vda9): mounted filesystem 8499bb43-f860-448d-b3b8-5a1fc2b80abf r/w with ordered data mode. Quota mode: none.
Jan 29 12:08:16.221249 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 29 12:08:16.222262 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 29 12:08:16.231660 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 12:08:16.233499 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 29 12:08:16.234356 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 29 12:08:16.234391 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 29 12:08:16.234410 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 12:08:16.239202 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 29 12:08:16.240560 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 29 12:08:16.246184 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (795)
Jan 29 12:08:16.246221 kernel: BTRFS info (device vda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 29 12:08:16.246232 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 12:08:16.247598 kernel: BTRFS info (device vda6): using free space tree
Jan 29 12:08:16.249630 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 12:08:16.250770 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 12:08:16.284222 initrd-setup-root[820]: cut: /sysroot/etc/passwd: No such file or directory
Jan 29 12:08:16.288002 initrd-setup-root[827]: cut: /sysroot/etc/group: No such file or directory
Jan 29 12:08:16.291552 initrd-setup-root[834]: cut: /sysroot/etc/shadow: No such file or directory
Jan 29 12:08:16.295375 initrd-setup-root[841]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 29 12:08:16.363187 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 29 12:08:16.372665 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 29 12:08:16.374164 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 29 12:08:16.380598 kernel: BTRFS info (device vda6): last unmount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 29 12:08:16.394091 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 29 12:08:16.395815 ignition[909]: INFO     : Ignition 2.19.0
Jan 29 12:08:16.395815 ignition[909]: INFO     : Stage: mount
Jan 29 12:08:16.396978 ignition[909]: INFO     : no configs at "/usr/lib/ignition/base.d"
Jan 29 12:08:16.396978 ignition[909]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 12:08:16.396978 ignition[909]: INFO     : mount: mount passed
Jan 29 12:08:16.398994 ignition[909]: INFO     : Ignition finished successfully
Jan 29 12:08:16.398854 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 29 12:08:16.410679 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 29 12:08:16.873973 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 29 12:08:16.888795 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 29 12:08:16.898626 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (923)
Jan 29 12:08:16.900775 kernel: BTRFS info (device vda6): first mount of filesystem db40e17a-cddf-4890-8d80-4d8cda0a956a
Jan 29 12:08:16.900792 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 29 12:08:16.900803 kernel: BTRFS info (device vda6): using free space tree
Jan 29 12:08:16.903613 kernel: BTRFS info (device vda6): auto enabling async discard
Jan 29 12:08:16.904708 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 29 12:08:16.931619 ignition[940]: INFO     : Ignition 2.19.0
Jan 29 12:08:16.931619 ignition[940]: INFO     : Stage: files
Jan 29 12:08:16.931619 ignition[940]: INFO     : no configs at "/usr/lib/ignition/base.d"
Jan 29 12:08:16.931619 ignition[940]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 12:08:16.936425 ignition[940]: DEBUG    : files: compiled without relabeling support, skipping
Jan 29 12:08:16.937620 ignition[940]: INFO     : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Jan 29 12:08:16.939004 ignition[940]: DEBUG    : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 29 12:08:16.942376 ignition[940]: INFO     : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 29 12:08:16.943857 ignition[940]: INFO     : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Jan 29 12:08:16.945600 unknown[940]: wrote ssh authorized keys file for user: core
Jan 29 12:08:16.946793 ignition[940]: INFO     : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 29 12:08:16.949239 ignition[940]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/home/core/install.sh"
Jan 29 12:08:16.950999 ignition[940]: INFO     : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jan 29 12:08:16.950999 ignition[940]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 12:08:16.950999 ignition[940]: INFO     : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 29 12:08:16.950999 ignition[940]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 29 12:08:16.950999 ignition[940]: INFO     : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 29 12:08:16.950999 ignition[940]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 29 12:08:16.950999 ignition[940]: INFO     : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1
Jan 29 12:08:17.237737 ignition[940]: INFO     : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jan 29 12:08:17.466020 ignition[940]: INFO     : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw"
Jan 29 12:08:17.466020 ignition[940]: INFO     : files: op(7): [started]  processing unit "coreos-metadata.service"
Jan 29 12:08:17.468935 ignition[940]: INFO     : files: op(7): op(8): [started]  writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 12:08:17.468935 ignition[940]: INFO     : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jan 29 12:08:17.468935 ignition[940]: INFO     : files: op(7): [finished] processing unit "coreos-metadata.service"
Jan 29 12:08:17.468935 ignition[940]: INFO     : files: op(9): [started]  setting preset to disabled for "coreos-metadata.service"
Jan 29 12:08:17.503412 ignition[940]: INFO     : files: op(9): op(a): [started]  removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 12:08:17.507410 ignition[940]: INFO     : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jan 29 12:08:17.508903 ignition[940]: INFO     : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Jan 29 12:08:17.508903 ignition[940]: INFO     : files: createResultFile: createFiles: op(b): [started]  writing file "/sysroot/etc/.ignition-result.json"
Jan 29 12:08:17.508903 ignition[940]: INFO     : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 29 12:08:17.508903 ignition[940]: INFO     : files: files passed
Jan 29 12:08:17.508903 ignition[940]: INFO     : Ignition finished successfully
Jan 29 12:08:17.510886 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 29 12:08:17.521799 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 29 12:08:17.523474 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 29 12:08:17.526740 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 29 12:08:17.529487 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 29 12:08:17.532117 initrd-setup-root-after-ignition[967]: grep: /sysroot/oem/oem-release: No such file or directory
Jan 29 12:08:17.534447 initrd-setup-root-after-ignition[970]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 12:08:17.534447 initrd-setup-root-after-ignition[970]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 12:08:17.537037 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 29 12:08:17.536385 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 12:08:17.538303 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 29 12:08:17.552961 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 29 12:08:17.574793 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 29 12:08:17.575627 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 29 12:08:17.576805 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 29 12:08:17.578316 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 29 12:08:17.579133 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 29 12:08:17.579931 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 29 12:08:17.597645 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 12:08:17.603776 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 29 12:08:17.613176 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 29 12:08:17.614185 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 12:08:17.615693 systemd[1]: Stopped target timers.target - Timer Units.
Jan 29 12:08:17.617046 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 29 12:08:17.617164 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 29 12:08:17.619047 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 29 12:08:17.620460 systemd[1]: Stopped target basic.target - Basic System.
Jan 29 12:08:17.621741 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 29 12:08:17.622999 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 29 12:08:17.624438 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 29 12:08:17.625904 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 29 12:08:17.627262 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 29 12:08:17.628693 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 29 12:08:17.630237 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 29 12:08:17.631573 systemd[1]: Stopped target swap.target - Swaps.
Jan 29 12:08:17.632715 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 29 12:08:17.632838 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 29 12:08:17.634533 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 29 12:08:17.635982 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 12:08:17.637392 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 29 12:08:17.640679 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 12:08:17.641620 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 29 12:08:17.641746 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 29 12:08:17.643912 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 29 12:08:17.644028 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 29 12:08:17.645500 systemd[1]: Stopped target paths.target - Path Units.
Jan 29 12:08:17.646658 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 29 12:08:17.650668 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 12:08:17.652617 systemd[1]: Stopped target slices.target - Slice Units.
Jan 29 12:08:17.653326 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 29 12:08:17.654485 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 29 12:08:17.654604 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 29 12:08:17.655753 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 29 12:08:17.655831 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 29 12:08:17.656955 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 29 12:08:17.657056 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 29 12:08:17.658377 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 29 12:08:17.658472 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 29 12:08:17.671862 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 29 12:08:17.673254 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 29 12:08:17.673936 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 29 12:08:17.674054 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 12:08:17.675430 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 29 12:08:17.675521 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 29 12:08:17.680958 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 29 12:08:17.681774 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 29 12:08:17.686399 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 29 12:08:17.687550 ignition[994]: INFO     : Ignition 2.19.0
Jan 29 12:08:17.687550 ignition[994]: INFO     : Stage: umount
Jan 29 12:08:17.690329 ignition[994]: INFO     : no configs at "/usr/lib/ignition/base.d"
Jan 29 12:08:17.690329 ignition[994]: INFO     : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jan 29 12:08:17.690329 ignition[994]: INFO     : umount: umount passed
Jan 29 12:08:17.690329 ignition[994]: INFO     : Ignition finished successfully
Jan 29 12:08:17.691335 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 29 12:08:17.692616 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 29 12:08:17.693874 systemd[1]: Stopped target network.target - Network.
Jan 29 12:08:17.695311 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 29 12:08:17.695382 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 29 12:08:17.696823 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 29 12:08:17.696862 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 29 12:08:17.698166 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 29 12:08:17.698205 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 29 12:08:17.699508 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 29 12:08:17.699552 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 29 12:08:17.701030 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 29 12:08:17.702386 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 29 12:08:17.707628 systemd-networkd[759]: eth0: DHCPv6 lease lost
Jan 29 12:08:17.709439 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 29 12:08:17.709582 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 29 12:08:17.710788 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 29 12:08:17.710877 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 29 12:08:17.713748 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 29 12:08:17.713799 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 12:08:17.726696 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 29 12:08:17.727614 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 29 12:08:17.727683 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 29 12:08:17.729474 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 29 12:08:17.729518 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 29 12:08:17.731430 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 29 12:08:17.731476 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 29 12:08:17.733488 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 29 12:08:17.733533 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 12:08:17.735261 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 12:08:17.744544 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 29 12:08:17.744696 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 29 12:08:17.749321 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 29 12:08:17.749429 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 29 12:08:17.751266 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 29 12:08:17.751307 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 29 12:08:17.755272 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 29 12:08:17.755399 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 12:08:17.757503 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 29 12:08:17.757540 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 29 12:08:17.759034 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 29 12:08:17.759068 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 12:08:17.760565 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 29 12:08:17.760621 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 29 12:08:17.762755 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 29 12:08:17.762796 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 29 12:08:17.765001 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 29 12:08:17.765044 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 29 12:08:17.777787 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 29 12:08:17.778865 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 29 12:08:17.778920 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 12:08:17.780824 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 29 12:08:17.780866 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 12:08:17.782665 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 29 12:08:17.782716 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 12:08:17.784528 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 29 12:08:17.784571 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:08:17.786485 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 29 12:08:17.786562 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 29 12:08:17.790043 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 29 12:08:17.791753 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 29 12:08:17.801392 systemd[1]: Switching root.
Jan 29 12:08:17.820410 systemd-journald[239]: Journal stopped
Jan 29 12:08:18.470913 systemd-journald[239]: Received SIGTERM from PID 1 (systemd).
Jan 29 12:08:18.470972 kernel: SELinux:  policy capability network_peer_controls=1
Jan 29 12:08:18.470985 kernel: SELinux:  policy capability open_perms=1
Jan 29 12:08:18.470995 kernel: SELinux:  policy capability extended_socket_class=1
Jan 29 12:08:18.471007 kernel: SELinux:  policy capability always_check_network=0
Jan 29 12:08:18.471017 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 29 12:08:18.471027 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 29 12:08:18.471036 kernel: SELinux:  policy capability genfs_seclabel_symlinks=0
Jan 29 12:08:18.471050 kernel: SELinux:  policy capability ioctl_skip_cloexec=0
Jan 29 12:08:18.471060 kernel: audit: type=1403 audit(1738152497.938:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 29 12:08:18.471070 systemd[1]: Successfully loaded SELinux policy in 30.220ms.
Jan 29 12:08:18.471092 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.764ms.
Jan 29 12:08:18.471103 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 29 12:08:18.471116 systemd[1]: Detected virtualization kvm.
Jan 29 12:08:18.471127 systemd[1]: Detected architecture arm64.
Jan 29 12:08:18.471137 systemd[1]: Detected first boot.
Jan 29 12:08:18.471148 systemd[1]: Initializing machine ID from VM UUID.
Jan 29 12:08:18.471158 zram_generator::config[1039]: No configuration found.
Jan 29 12:08:18.471170 systemd[1]: Populated /etc with preset unit settings.
Jan 29 12:08:18.471180 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 29 12:08:18.471192 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 29 12:08:18.471203 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 29 12:08:18.471214 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 29 12:08:18.471225 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 29 12:08:18.471235 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 29 12:08:18.471246 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 29 12:08:18.471258 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 29 12:08:18.471269 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 29 12:08:18.471280 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 29 12:08:18.471291 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 29 12:08:18.471301 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 29 12:08:18.471322 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 29 12:08:18.471334 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 29 12:08:18.471345 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 29 12:08:18.471355 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 29 12:08:18.471369 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 29 12:08:18.471380 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 29 12:08:18.471390 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 29 12:08:18.471401 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 29 12:08:18.471411 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 29 12:08:18.471422 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 29 12:08:18.471432 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 29 12:08:18.471444 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 29 12:08:18.471456 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 29 12:08:18.471466 systemd[1]: Reached target slices.target - Slice Units.
Jan 29 12:08:18.471477 systemd[1]: Reached target swap.target - Swaps.
Jan 29 12:08:18.471487 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 29 12:08:18.471498 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 29 12:08:18.471509 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 29 12:08:18.471519 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 29 12:08:18.471530 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 29 12:08:18.471541 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 29 12:08:18.471553 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 29 12:08:18.471564 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 29 12:08:18.471575 systemd[1]: Mounting media.mount - External Media Directory...
Jan 29 12:08:18.471595 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 29 12:08:18.471606 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 29 12:08:18.471617 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 29 12:08:18.471628 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 29 12:08:18.471639 systemd[1]: Reached target machines.target - Containers.
Jan 29 12:08:18.471652 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 29 12:08:18.471664 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 12:08:18.471677 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 29 12:08:18.471688 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 29 12:08:18.471703 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 12:08:18.471716 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 12:08:18.471726 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 12:08:18.471737 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 29 12:08:18.471747 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 12:08:18.471760 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 29 12:08:18.471771 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 29 12:08:18.471782 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 29 12:08:18.471792 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 29 12:08:18.471803 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 29 12:08:18.471815 kernel: fuse: init (API version 7.39)
Jan 29 12:08:18.471825 kernel: loop: module loaded
Jan 29 12:08:18.471834 kernel: ACPI: bus type drm_connector registered
Jan 29 12:08:18.471845 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 29 12:08:18.471858 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 29 12:08:18.471869 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 29 12:08:18.471906 systemd-journald[1103]: Collecting audit messages is disabled.
Jan 29 12:08:18.471932 systemd-journald[1103]: Journal started
Jan 29 12:08:18.471954 systemd-journald[1103]: Runtime Journal (/run/log/journal/960a774ce52443fa93664bdbf89cd58b) is 5.9M, max 47.3M, 41.4M free.
Jan 29 12:08:18.471993 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 29 12:08:18.295658 systemd[1]: Queued start job for default target multi-user.target.
Jan 29 12:08:18.311424 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 29 12:08:18.311780 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 29 12:08:18.476235 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 29 12:08:18.477970 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 29 12:08:18.477992 systemd[1]: Stopped verity-setup.service.
Jan 29 12:08:18.480890 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 29 12:08:18.481490 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 29 12:08:18.482417 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 29 12:08:18.483353 systemd[1]: Mounted media.mount - External Media Directory.
Jan 29 12:08:18.484245 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 29 12:08:18.485146 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 29 12:08:18.486064 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 29 12:08:18.488627 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 29 12:08:18.489758 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 29 12:08:18.489894 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 29 12:08:18.491001 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 12:08:18.491147 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 12:08:18.492360 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 12:08:18.492498 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 12:08:18.493653 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 12:08:18.493795 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 12:08:18.495939 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 29 12:08:18.496074 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 29 12:08:18.497428 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 29 12:08:18.498837 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 12:08:18.498959 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 12:08:18.500288 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 29 12:08:18.502647 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 29 12:08:18.503881 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 29 12:08:18.514383 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 29 12:08:18.521788 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 29 12:08:18.523538 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 29 12:08:18.524396 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 29 12:08:18.524431 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 29 12:08:18.526062 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 29 12:08:18.527883 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 29 12:08:18.530770 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 29 12:08:18.531673 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 12:08:18.532875 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 29 12:08:18.534455 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 29 12:08:18.535523 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 12:08:18.538773 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 29 12:08:18.539689 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 12:08:18.541904 systemd-journald[1103]: Time spent on flushing to /var/log/journal/960a774ce52443fa93664bdbf89cd58b is 22.743ms for 836 entries.
Jan 29 12:08:18.541904 systemd-journald[1103]: System Journal (/var/log/journal/960a774ce52443fa93664bdbf89cd58b) is 8.0M, max 195.6M, 187.6M free.
Jan 29 12:08:18.572723 systemd-journald[1103]: Received client request to flush runtime journal.
Jan 29 12:08:18.572770 kernel: loop0: detected capacity change from 0 to 114328
Jan 29 12:08:18.542838 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 29 12:08:18.547713 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 29 12:08:18.553771 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 29 12:08:18.556075 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 29 12:08:18.557272 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 29 12:08:18.559774 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 29 12:08:18.560822 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 29 12:08:18.563608 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 29 12:08:18.569867 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 29 12:08:18.576621 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 29 12:08:18.579198 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 29 12:08:18.582127 systemd-tmpfiles[1152]: ACLs are not supported, ignoring.
Jan 29 12:08:18.582143 systemd-tmpfiles[1152]: ACLs are not supported, ignoring.
Jan 29 12:08:18.582764 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 29 12:08:18.584001 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 29 12:08:18.591208 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 29 12:08:18.594264 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 29 12:08:18.607642 kernel: loop1: detected capacity change from 0 to 201592
Jan 29 12:08:18.607775 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 29 12:08:18.609255 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 29 12:08:18.610997 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 29 12:08:18.615955 udevadm[1164]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 29 12:08:18.635484 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 29 12:08:18.638288 kernel: loop2: detected capacity change from 0 to 114432
Jan 29 12:08:18.647710 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 29 12:08:18.657531 systemd-tmpfiles[1173]: ACLs are not supported, ignoring.
Jan 29 12:08:18.657550 systemd-tmpfiles[1173]: ACLs are not supported, ignoring.
Jan 29 12:08:18.660690 kernel: loop3: detected capacity change from 0 to 114328
Jan 29 12:08:18.662636 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 29 12:08:18.666631 kernel: loop4: detected capacity change from 0 to 201592
Jan 29 12:08:18.676016 kernel: loop5: detected capacity change from 0 to 114432
Jan 29 12:08:18.681783 (sd-merge)[1176]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jan 29 12:08:18.682416 (sd-merge)[1176]: Merged extensions into '/usr'.
Jan 29 12:08:18.686165 systemd[1]: Reloading requested from client PID 1150 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 29 12:08:18.686181 systemd[1]: Reloading...
Jan 29 12:08:18.720609 zram_generator::config[1204]: No configuration found.
Jan 29 12:08:18.826351 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 12:08:18.839733 ldconfig[1145]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 29 12:08:18.861993 systemd[1]: Reloading finished in 175 ms.
Jan 29 12:08:18.897626 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 29 12:08:18.898818 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 29 12:08:18.912806 systemd[1]: Starting ensure-sysext.service...
Jan 29 12:08:18.914536 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 29 12:08:18.922194 systemd[1]: Reloading requested from client PID 1239 ('systemctl') (unit ensure-sysext.service)...
Jan 29 12:08:18.922210 systemd[1]: Reloading...
Jan 29 12:08:18.931560 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 29 12:08:18.931872 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 29 12:08:18.932480 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 29 12:08:18.932712 systemd-tmpfiles[1240]: ACLs are not supported, ignoring.
Jan 29 12:08:18.932763 systemd-tmpfiles[1240]: ACLs are not supported, ignoring.
Jan 29 12:08:18.935565 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 12:08:18.935690 systemd-tmpfiles[1240]: Skipping /boot
Jan 29 12:08:18.943055 systemd-tmpfiles[1240]: Detected autofs mount point /boot during canonicalization of boot.
Jan 29 12:08:18.943157 systemd-tmpfiles[1240]: Skipping /boot
Jan 29 12:08:18.967615 zram_generator::config[1267]: No configuration found.
Jan 29 12:08:19.054801 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 12:08:19.090139 systemd[1]: Reloading finished in 167 ms.
Jan 29 12:08:19.106625 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 29 12:08:19.118074 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 29 12:08:19.125324 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 29 12:08:19.127649 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 29 12:08:19.129577 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 29 12:08:19.133883 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 29 12:08:19.139959 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 29 12:08:19.146054 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 29 12:08:19.149468 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 12:08:19.152471 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 12:08:19.157705 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 12:08:19.163480 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 12:08:19.165912 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 12:08:19.166708 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 12:08:19.167173 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 12:08:19.169387 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 29 12:08:19.170852 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 12:08:19.170978 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 12:08:19.172218 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 12:08:19.172325 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 12:08:19.180825 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 29 12:08:19.180972 systemd-udevd[1309]: Using default interface naming scheme 'v255'.
Jan 29 12:08:19.184166 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 12:08:19.196120 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 12:08:19.201841 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 12:08:19.203933 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 12:08:19.204880 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 12:08:19.208857 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 29 12:08:19.215038 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 29 12:08:19.216571 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 29 12:08:19.218105 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 29 12:08:19.221192 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 12:08:19.221326 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 12:08:19.222656 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 12:08:19.222792 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 12:08:19.224197 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 12:08:19.224336 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 12:08:19.227327 augenrules[1344]: No rules
Jan 29 12:08:19.227623 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 29 12:08:19.229135 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 29 12:08:19.241315 systemd[1]: Finished ensure-sysext.service.
Jan 29 12:08:19.244242 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 29 12:08:19.250889 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 29 12:08:19.254250 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 29 12:08:19.256838 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 29 12:08:19.260953 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 29 12:08:19.261913 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 29 12:08:19.265774 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 29 12:08:19.272131 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jan 29 12:08:19.273668 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 29 12:08:19.273987 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 29 12:08:19.275301 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 29 12:08:19.275484 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 29 12:08:19.276960 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 29 12:08:19.277089 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 29 12:08:19.278357 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 29 12:08:19.278480 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 29 12:08:19.280085 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 29 12:08:19.280218 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 29 12:08:19.285926 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jan 29 12:08:19.286048 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 29 12:08:19.286101 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 29 12:08:19.293756 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1340)
Jan 29 12:08:19.347303 systemd-resolved[1307]: Positive Trust Anchors:
Jan 29 12:08:19.347323 systemd-resolved[1307]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 29 12:08:19.347356 systemd-resolved[1307]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 29 12:08:19.356941 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jan 29 12:08:19.357841 systemd-networkd[1378]: lo: Link UP
Jan 29 12:08:19.358110 systemd-networkd[1378]: lo: Gained carrier
Jan 29 12:08:19.358421 systemd[1]: Reached target time-set.target - System Time Set.
Jan 29 12:08:19.359123 systemd-networkd[1378]: Enumeration completed
Jan 29 12:08:19.359676 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 29 12:08:19.359871 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 12:08:19.359968 systemd-networkd[1378]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 29 12:08:19.360943 systemd-networkd[1378]: eth0: Link UP
Jan 29 12:08:19.361006 systemd-resolved[1307]: Defaulting to hostname 'linux'.
Jan 29 12:08:19.361065 systemd-networkd[1378]: eth0: Gained carrier
Jan 29 12:08:19.361126 systemd-networkd[1378]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 29 12:08:19.367794 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 29 12:08:19.369209 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 29 12:08:19.371605 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 29 12:08:19.372911 systemd[1]: Reached target network.target - Network.
Jan 29 12:08:19.373616 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 29 12:08:19.373659 systemd-networkd[1378]: eth0: DHCPv4 address 10.0.0.106/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jan 29 12:08:19.374823 systemd-timesyncd[1379]: Network configuration changed, trying to establish connection.
Jan 29 12:08:19.873563 systemd-timesyncd[1379]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jan 29 12:08:19.873612 systemd-timesyncd[1379]: Initial clock synchronization to Wed 2025-01-29 12:08:19.873397 UTC.
Jan 29 12:08:19.873617 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 29 12:08:19.873650 systemd-resolved[1307]: Clock change detected. Flushing caches.
Jan 29 12:08:19.899692 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 29 12:08:19.901081 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 29 12:08:19.907074 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 29 12:08:19.917615 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 29 12:08:19.947844 lvm[1403]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 12:08:19.951425 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 29 12:08:19.984939 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 29 12:08:19.986130 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 29 12:08:19.987038 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 29 12:08:19.987931 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 29 12:08:19.988858 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 29 12:08:19.989941 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 29 12:08:19.990992 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 29 12:08:19.991930 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 29 12:08:19.992833 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 29 12:08:19.992872 systemd[1]: Reached target paths.target - Path Units.
Jan 29 12:08:19.993532 systemd[1]: Reached target timers.target - Timer Units.
Jan 29 12:08:19.995019 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 29 12:08:19.997198 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 29 12:08:20.007639 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 29 12:08:20.009673 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 29 12:08:20.011042 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 29 12:08:20.012011 systemd[1]: Reached target sockets.target - Socket Units.
Jan 29 12:08:20.012798 systemd[1]: Reached target basic.target - Basic System.
Jan 29 12:08:20.013498 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 29 12:08:20.013534 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 29 12:08:20.014500 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 29 12:08:20.016320 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 29 12:08:20.017827 lvm[1411]:   WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 29 12:08:20.019665 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 29 12:08:20.026460 jq[1414]: false
Jan 29 12:08:20.026950 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 29 12:08:20.028166 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 29 12:08:20.029510 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 29 12:08:20.031975 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 29 12:08:20.034834 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 29 12:08:20.041026 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 29 12:08:20.043243 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 29 12:08:20.043780 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 29 12:08:20.044541 systemd[1]: Starting update-engine.service - Update Engine...
Jan 29 12:08:20.047629 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 29 12:08:20.049901 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 29 12:08:20.054842 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 29 12:08:20.055504 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 29 12:08:20.055829 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 29 12:08:20.055979 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 29 12:08:20.061346 dbus-daemon[1413]: [system] SELinux support is enabled
Jan 29 12:08:20.062796 jq[1426]: true
Jan 29 12:08:20.064111 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 29 12:08:20.074118 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 29 12:08:20.074184 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 29 12:08:20.076704 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 29 12:08:20.076738 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 29 12:08:20.081644 systemd[1]: motdgen.service: Deactivated successfully.
Jan 29 12:08:20.081872 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 29 12:08:20.086041 jq[1436]: true
Jan 29 12:08:20.088795 extend-filesystems[1415]: Found loop3
Jan 29 12:08:20.088795 extend-filesystems[1415]: Found loop4
Jan 29 12:08:20.088795 extend-filesystems[1415]: Found loop5
Jan 29 12:08:20.102559 update_engine[1423]: I20250129 12:08:20.096083  1423 main.cc:92] Flatcar Update Engine starting
Jan 29 12:08:20.096754 (ntainerd)[1437]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 29 12:08:20.105271 extend-filesystems[1415]: Found vda
Jan 29 12:08:20.105271 extend-filesystems[1415]: Found vda1
Jan 29 12:08:20.105271 extend-filesystems[1415]: Found vda2
Jan 29 12:08:20.105271 extend-filesystems[1415]: Found vda3
Jan 29 12:08:20.105271 extend-filesystems[1415]: Found usr
Jan 29 12:08:20.105271 extend-filesystems[1415]: Found vda4
Jan 29 12:08:20.105271 extend-filesystems[1415]: Found vda6
Jan 29 12:08:20.105271 extend-filesystems[1415]: Found vda7
Jan 29 12:08:20.105271 extend-filesystems[1415]: Found vda9
Jan 29 12:08:20.105271 extend-filesystems[1415]: Checking size of /dev/vda9
Jan 29 12:08:20.105408 systemd-logind[1422]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 29 12:08:20.130791 update_engine[1423]: I20250129 12:08:20.107481  1423 update_check_scheduler.cc:74] Next update check in 10m8s
Jan 29 12:08:20.130830 extend-filesystems[1415]: Resized partition /dev/vda9
Jan 29 12:08:20.107530 systemd[1]: Started update-engine.service - Update Engine.
Jan 29 12:08:20.107687 systemd-logind[1422]: New seat seat0.
Jan 29 12:08:20.118863 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 29 12:08:20.124727 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 29 12:08:20.135447 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1350)
Jan 29 12:08:20.145570 extend-filesystems[1462]: resize2fs 1.47.1 (20-May-2024)
Jan 29 12:08:20.153448 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jan 29 12:08:20.196466 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jan 29 12:08:20.197819 locksmithd[1458]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 29 12:08:20.219247 extend-filesystems[1462]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jan 29 12:08:20.219247 extend-filesystems[1462]: old_desc_blocks = 1, new_desc_blocks = 1
Jan 29 12:08:20.219247 extend-filesystems[1462]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jan 29 12:08:20.226324 extend-filesystems[1415]: Resized filesystem in /dev/vda9
Jan 29 12:08:20.227024 bash[1460]: Updated "/home/core/.ssh/authorized_keys"
Jan 29 12:08:20.221714 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 29 12:08:20.221938 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 29 12:08:20.228446 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 29 12:08:20.230157 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 29 12:08:20.316428 containerd[1437]: time="2025-01-29T12:08:20.315333571Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 29 12:08:20.339942 containerd[1437]: time="2025-01-29T12:08:20.339864131Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jan 29 12:08:20.341421 containerd[1437]: time="2025-01-29T12:08:20.341365091Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 29 12:08:20.341421 containerd[1437]: time="2025-01-29T12:08:20.341406371Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 29 12:08:20.341467 containerd[1437]: time="2025-01-29T12:08:20.341436011Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 29 12:08:20.341665 containerd[1437]: time="2025-01-29T12:08:20.341631971Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 29 12:08:20.341665 containerd[1437]: time="2025-01-29T12:08:20.341659331Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 29 12:08:20.341744 containerd[1437]: time="2025-01-29T12:08:20.341718091Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 12:08:20.341744 containerd[1437]: time="2025-01-29T12:08:20.341736571Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 29 12:08:20.341925 containerd[1437]: time="2025-01-29T12:08:20.341895491Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 12:08:20.341925 containerd[1437]: time="2025-01-29T12:08:20.341917531Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 29 12:08:20.341970 containerd[1437]: time="2025-01-29T12:08:20.341931171Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 12:08:20.341970 containerd[1437]: time="2025-01-29T12:08:20.341941051Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 29 12:08:20.342021 containerd[1437]: time="2025-01-29T12:08:20.342006531Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 29 12:08:20.342241 containerd[1437]: time="2025-01-29T12:08:20.342211771Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 29 12:08:20.342335 containerd[1437]: time="2025-01-29T12:08:20.342317491Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 29 12:08:20.342356 containerd[1437]: time="2025-01-29T12:08:20.342335251Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 29 12:08:20.342442 containerd[1437]: time="2025-01-29T12:08:20.342410251Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 29 12:08:20.342491 containerd[1437]: time="2025-01-29T12:08:20.342477931Z" level=info msg="metadata content store policy set" policy=shared
Jan 29 12:08:20.345913 containerd[1437]: time="2025-01-29T12:08:20.345872571Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 29 12:08:20.345965 containerd[1437]: time="2025-01-29T12:08:20.345926851Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 29 12:08:20.345965 containerd[1437]: time="2025-01-29T12:08:20.345943011Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 29 12:08:20.345965 containerd[1437]: time="2025-01-29T12:08:20.345958731Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 29 12:08:20.346015 containerd[1437]: time="2025-01-29T12:08:20.345974091Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 29 12:08:20.346156 containerd[1437]: time="2025-01-29T12:08:20.346122291Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 29 12:08:20.346370 containerd[1437]: time="2025-01-29T12:08:20.346343731Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 29 12:08:20.346488 containerd[1437]: time="2025-01-29T12:08:20.346464211Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 29 12:08:20.346517 containerd[1437]: time="2025-01-29T12:08:20.346489371Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 29 12:08:20.346517 containerd[1437]: time="2025-01-29T12:08:20.346502811Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 29 12:08:20.346554 containerd[1437]: time="2025-01-29T12:08:20.346526371Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 29 12:08:20.346554 containerd[1437]: time="2025-01-29T12:08:20.346540531Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 29 12:08:20.346596 containerd[1437]: time="2025-01-29T12:08:20.346553131Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 29 12:08:20.346596 containerd[1437]: time="2025-01-29T12:08:20.346567131Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 29 12:08:20.346596 containerd[1437]: time="2025-01-29T12:08:20.346581091Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 29 12:08:20.346644 containerd[1437]: time="2025-01-29T12:08:20.346595571Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 29 12:08:20.346644 containerd[1437]: time="2025-01-29T12:08:20.346609291Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 29 12:08:20.346644 containerd[1437]: time="2025-01-29T12:08:20.346620971Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 29 12:08:20.346644 containerd[1437]: time="2025-01-29T12:08:20.346639611Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 29 12:08:20.346710 containerd[1437]: time="2025-01-29T12:08:20.346660051Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 29 12:08:20.346710 containerd[1437]: time="2025-01-29T12:08:20.346672691Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 29 12:08:20.346710 containerd[1437]: time="2025-01-29T12:08:20.346685131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 29 12:08:20.346710 containerd[1437]: time="2025-01-29T12:08:20.346697251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 29 12:08:20.346784 containerd[1437]: time="2025-01-29T12:08:20.346712771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 29 12:08:20.346784 containerd[1437]: time="2025-01-29T12:08:20.346725451Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 29 12:08:20.346784 containerd[1437]: time="2025-01-29T12:08:20.346738091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 29 12:08:20.346784 containerd[1437]: time="2025-01-29T12:08:20.346750251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 29 12:08:20.346784 containerd[1437]: time="2025-01-29T12:08:20.346764331Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 29 12:08:20.346784 containerd[1437]: time="2025-01-29T12:08:20.346776771Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 29 12:08:20.346887 containerd[1437]: time="2025-01-29T12:08:20.346787811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 29 12:08:20.346887 containerd[1437]: time="2025-01-29T12:08:20.346804611Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 29 12:08:20.346887 containerd[1437]: time="2025-01-29T12:08:20.346826731Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 29 12:08:20.346887 containerd[1437]: time="2025-01-29T12:08:20.346846851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 29 12:08:20.346887 containerd[1437]: time="2025-01-29T12:08:20.346859251Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 29 12:08:20.346887 containerd[1437]: time="2025-01-29T12:08:20.346869531Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 29 12:08:20.347604 containerd[1437]: time="2025-01-29T12:08:20.347568491Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 29 12:08:20.347638 containerd[1437]: time="2025-01-29T12:08:20.347603211Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 29 12:08:20.347638 containerd[1437]: time="2025-01-29T12:08:20.347615411Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 29 12:08:20.347638 containerd[1437]: time="2025-01-29T12:08:20.347627691Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 29 12:08:20.347693 containerd[1437]: time="2025-01-29T12:08:20.347641651Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 29 12:08:20.347693 containerd[1437]: time="2025-01-29T12:08:20.347656611Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 29 12:08:20.347693 containerd[1437]: time="2025-01-29T12:08:20.347667971Z" level=info msg="NRI interface is disabled by configuration."
Jan 29 12:08:20.347693 containerd[1437]: time="2025-01-29T12:08:20.347680091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 29 12:08:20.348018 containerd[1437]: time="2025-01-29T12:08:20.347953331Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 29 12:08:20.348018 containerd[1437]: time="2025-01-29T12:08:20.348018211Z" level=info msg="Connect containerd service"
Jan 29 12:08:20.348154 containerd[1437]: time="2025-01-29T12:08:20.348053051Z" level=info msg="using legacy CRI server"
Jan 29 12:08:20.348154 containerd[1437]: time="2025-01-29T12:08:20.348060731Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 29 12:08:20.348154 containerd[1437]: time="2025-01-29T12:08:20.348149291Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 29 12:08:20.348929 containerd[1437]: time="2025-01-29T12:08:20.348883771Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 29 12:08:20.351324 containerd[1437]: time="2025-01-29T12:08:20.349184531Z" level=info msg="Start subscribing containerd event"
Jan 29 12:08:20.351324 containerd[1437]: time="2025-01-29T12:08:20.349240851Z" level=info msg="Start recovering state"
Jan 29 12:08:20.351324 containerd[1437]: time="2025-01-29T12:08:20.349307531Z" level=info msg="Start event monitor"
Jan 29 12:08:20.351324 containerd[1437]: time="2025-01-29T12:08:20.349318451Z" level=info msg="Start snapshots syncer"
Jan 29 12:08:20.351324 containerd[1437]: time="2025-01-29T12:08:20.349327571Z" level=info msg="Start cni network conf syncer for default"
Jan 29 12:08:20.351324 containerd[1437]: time="2025-01-29T12:08:20.349335491Z" level=info msg="Start streaming server"
Jan 29 12:08:20.351324 containerd[1437]: time="2025-01-29T12:08:20.349397091Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 29 12:08:20.351324 containerd[1437]: time="2025-01-29T12:08:20.349455891Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 29 12:08:20.351324 containerd[1437]: time="2025-01-29T12:08:20.349502931Z" level=info msg="containerd successfully booted in 0.035876s"
Jan 29 12:08:20.349625 systemd[1]: Started containerd.service - containerd container runtime.
Jan 29 12:08:20.642901 sshd_keygen[1435]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 29 12:08:20.666513 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 29 12:08:20.677710 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 29 12:08:20.683630 systemd[1]: issuegen.service: Deactivated successfully.
Jan 29 12:08:20.683828 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 29 12:08:20.686288 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 29 12:08:20.700500 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 29 12:08:20.703575 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jan 29 12:08:20.705759 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jan 29 12:08:20.706852 systemd[1]: Reached target getty.target - Login Prompts.
Jan 29 12:08:21.007537 systemd-networkd[1378]: eth0: Gained IPv6LL
Jan 29 12:08:21.010546 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 29 12:08:21.012265 systemd[1]: Reached target network-online.target - Network is Online.
Jan 29 12:08:21.031670 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 29 12:08:21.034011 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:08:21.035841 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 29 12:08:21.052192 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 29 12:08:21.052449 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 29 12:08:21.053937 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 29 12:08:21.060465 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 29 12:08:21.557928 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:08:21.559201 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 29 12:08:21.560124 systemd[1]: Startup finished in 548ms (kernel) + 4.230s (initrd) + 3.164s (userspace) = 7.943s.
Jan 29 12:08:21.562948 (kubelet)[1518]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 29 12:08:21.972957 kubelet[1518]: E0129 12:08:21.972759    1518 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 29 12:08:21.975205 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 29 12:08:21.975343 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 29 12:08:27.356050 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jan 29 12:08:27.357137 systemd[1]: Started sshd@0-10.0.0.106:22-10.0.0.1:53660.service - OpenSSH per-connection server daemon (10.0.0.1:53660).
Jan 29 12:08:27.411763 sshd[1531]: Accepted publickey for core from 10.0.0.1 port 53660 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44
Jan 29 12:08:27.413497 sshd[1531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:08:27.421473 systemd-logind[1422]: New session 1 of user core.
Jan 29 12:08:27.422385 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 29 12:08:27.435638 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 29 12:08:27.444598 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 29 12:08:27.448731 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 29 12:08:27.453614 (systemd)[1535]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 29 12:08:27.524603 systemd[1535]: Queued start job for default target default.target.
Jan 29 12:08:27.534363 systemd[1535]: Created slice app.slice - User Application Slice.
Jan 29 12:08:27.534408 systemd[1535]: Reached target paths.target - Paths.
Jan 29 12:08:27.534451 systemd[1535]: Reached target timers.target - Timers.
Jan 29 12:08:27.535726 systemd[1535]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 29 12:08:27.545446 systemd[1535]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 29 12:08:27.545509 systemd[1535]: Reached target sockets.target - Sockets.
Jan 29 12:08:27.545522 systemd[1535]: Reached target basic.target - Basic System.
Jan 29 12:08:27.545558 systemd[1535]: Reached target default.target - Main User Target.
Jan 29 12:08:27.545585 systemd[1535]: Startup finished in 86ms.
Jan 29 12:08:27.545812 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 29 12:08:27.547104 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 29 12:08:27.608794 systemd[1]: Started sshd@1-10.0.0.106:22-10.0.0.1:53672.service - OpenSSH per-connection server daemon (10.0.0.1:53672).
Jan 29 12:08:27.651911 sshd[1546]: Accepted publickey for core from 10.0.0.1 port 53672 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44
Jan 29 12:08:27.653242 sshd[1546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:08:27.657003 systemd-logind[1422]: New session 2 of user core.
Jan 29 12:08:27.666592 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 29 12:08:27.717699 sshd[1546]: pam_unix(sshd:session): session closed for user core
Jan 29 12:08:27.731686 systemd[1]: sshd@1-10.0.0.106:22-10.0.0.1:53672.service: Deactivated successfully.
Jan 29 12:08:27.733004 systemd[1]: session-2.scope: Deactivated successfully.
Jan 29 12:08:27.734169 systemd-logind[1422]: Session 2 logged out. Waiting for processes to exit.
Jan 29 12:08:27.735262 systemd[1]: Started sshd@2-10.0.0.106:22-10.0.0.1:53684.service - OpenSSH per-connection server daemon (10.0.0.1:53684).
Jan 29 12:08:27.735980 systemd-logind[1422]: Removed session 2.
Jan 29 12:08:27.770866 sshd[1553]: Accepted publickey for core from 10.0.0.1 port 53684 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44
Jan 29 12:08:27.772183 sshd[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:08:27.776251 systemd-logind[1422]: New session 3 of user core.
Jan 29 12:08:27.786586 systemd[1]: Started session-3.scope - Session 3 of User core.
Jan 29 12:08:27.836274 sshd[1553]: pam_unix(sshd:session): session closed for user core
Jan 29 12:08:27.846686 systemd[1]: sshd@2-10.0.0.106:22-10.0.0.1:53684.service: Deactivated successfully.
Jan 29 12:08:27.848029 systemd[1]: session-3.scope: Deactivated successfully.
Jan 29 12:08:27.850404 systemd-logind[1422]: Session 3 logged out. Waiting for processes to exit.
Jan 29 12:08:27.851524 systemd[1]: Started sshd@3-10.0.0.106:22-10.0.0.1:53692.service - OpenSSH per-connection server daemon (10.0.0.1:53692).
Jan 29 12:08:27.852235 systemd-logind[1422]: Removed session 3.
Jan 29 12:08:27.887160 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 53692 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44
Jan 29 12:08:27.888280 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:08:27.892336 systemd-logind[1422]: New session 4 of user core.
Jan 29 12:08:27.902635 systemd[1]: Started session-4.scope - Session 4 of User core.
Jan 29 12:08:27.955403 sshd[1560]: pam_unix(sshd:session): session closed for user core
Jan 29 12:08:27.963654 systemd[1]: sshd@3-10.0.0.106:22-10.0.0.1:53692.service: Deactivated successfully.
Jan 29 12:08:27.965038 systemd[1]: session-4.scope: Deactivated successfully.
Jan 29 12:08:27.966175 systemd-logind[1422]: Session 4 logged out. Waiting for processes to exit.
Jan 29 12:08:27.967241 systemd[1]: Started sshd@4-10.0.0.106:22-10.0.0.1:53702.service - OpenSSH per-connection server daemon (10.0.0.1:53702).
Jan 29 12:08:27.968081 systemd-logind[1422]: Removed session 4.
Jan 29 12:08:28.002830 sshd[1567]: Accepted publickey for core from 10.0.0.1 port 53702 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44
Jan 29 12:08:28.004013 sshd[1567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:08:28.007477 systemd-logind[1422]: New session 5 of user core.
Jan 29 12:08:28.017634 systemd[1]: Started session-5.scope - Session 5 of User core.
Jan 29 12:08:28.074472 sudo[1570]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 29 12:08:28.074735 sudo[1570]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 12:08:28.093154 sudo[1570]: pam_unix(sudo:session): session closed for user root
Jan 29 12:08:28.094754 sshd[1567]: pam_unix(sshd:session): session closed for user core
Jan 29 12:08:28.109578 systemd[1]: sshd@4-10.0.0.106:22-10.0.0.1:53702.service: Deactivated successfully.
Jan 29 12:08:28.110835 systemd[1]: session-5.scope: Deactivated successfully.
Jan 29 12:08:28.112524 systemd-logind[1422]: Session 5 logged out. Waiting for processes to exit.
Jan 29 12:08:28.129724 systemd[1]: Started sshd@5-10.0.0.106:22-10.0.0.1:53710.service - OpenSSH per-connection server daemon (10.0.0.1:53710).
Jan 29 12:08:28.130503 systemd-logind[1422]: Removed session 5.
Jan 29 12:08:28.161718 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 53710 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44
Jan 29 12:08:28.162732 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:08:28.166198 systemd-logind[1422]: New session 6 of user core.
Jan 29 12:08:28.176577 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 29 12:08:28.226708 sudo[1579]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 29 12:08:28.226975 sudo[1579]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 12:08:28.229770 sudo[1579]: pam_unix(sudo:session): session closed for user root
Jan 29 12:08:28.233986 sudo[1578]:     core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jan 29 12:08:28.234474 sudo[1578]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 12:08:28.251676 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jan 29 12:08:28.252780 auditctl[1582]: No rules
Jan 29 12:08:28.253602 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 29 12:08:28.253795 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jan 29 12:08:28.255321 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 29 12:08:28.277674 augenrules[1600]: No rules
Jan 29 12:08:28.279529 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 29 12:08:28.280434 sudo[1578]: pam_unix(sudo:session): session closed for user root
Jan 29 12:08:28.281751 sshd[1575]: pam_unix(sshd:session): session closed for user core
Jan 29 12:08:28.291624 systemd[1]: sshd@5-10.0.0.106:22-10.0.0.1:53710.service: Deactivated successfully.
Jan 29 12:08:28.292976 systemd[1]: session-6.scope: Deactivated successfully.
Jan 29 12:08:28.294078 systemd-logind[1422]: Session 6 logged out. Waiting for processes to exit.
Jan 29 12:08:28.295101 systemd[1]: Started sshd@6-10.0.0.106:22-10.0.0.1:53714.service - OpenSSH per-connection server daemon (10.0.0.1:53714).
Jan 29 12:08:28.295823 systemd-logind[1422]: Removed session 6.
Jan 29 12:08:28.330514 sshd[1608]: Accepted publickey for core from 10.0.0.1 port 53714 ssh2: RSA SHA256:GGDajpEHkKMMPS5XYOx6gDtGUu+BwzJk0riZNzWzV44
Jan 29 12:08:28.331783 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:08:28.335693 systemd-logind[1422]: New session 7 of user core.
Jan 29 12:08:28.345578 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 29 12:08:28.395642 sudo[1611]:     core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 29 12:08:28.395916 sudo[1611]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 29 12:08:28.417682 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jan 29 12:08:28.431620 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jan 29 12:08:28.431780 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jan 29 12:08:28.852517 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:08:28.864623 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:08:28.882800 systemd[1]: Reloading requested from client PID 1652 ('systemctl') (unit session-7.scope)...
Jan 29 12:08:28.882816 systemd[1]: Reloading...
Jan 29 12:08:28.945439 zram_generator::config[1694]: No configuration found.
Jan 29 12:08:29.119098 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 29 12:08:29.170544 systemd[1]: Reloading finished in 287 ms.
Jan 29 12:08:29.209832 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:08:29.212219 systemd[1]: kubelet.service: Deactivated successfully.
Jan 29 12:08:29.212488 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:08:29.214207 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 29 12:08:29.309954 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 29 12:08:29.313768 (kubelet)[1737]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 29 12:08:29.346068 kubelet[1737]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 12:08:29.346068 kubelet[1737]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 29 12:08:29.346068 kubelet[1737]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 29 12:08:29.346374 kubelet[1737]: I0129 12:08:29.346115    1737 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 29 12:08:31.608039 kubelet[1737]: I0129 12:08:31.607982    1737 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Jan 29 12:08:31.608039 kubelet[1737]: I0129 12:08:31.608019    1737 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 29 12:08:31.608554 kubelet[1737]: I0129 12:08:31.608284    1737 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 29 12:08:31.654166 kubelet[1737]: I0129 12:08:31.654124    1737 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 29 12:08:31.667653 kubelet[1737]: E0129 12:08:31.667601    1737 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jan 29 12:08:31.667653 kubelet[1737]: I0129 12:08:31.667649    1737 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jan 29 12:08:31.671158 kubelet[1737]: I0129 12:08:31.671076    1737 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
Jan 29 12:08:31.671940 kubelet[1737]: I0129 12:08:31.671888    1737 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 29 12:08:31.672095 kubelet[1737]: I0129 12:08:31.671930    1737 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.106","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 29 12:08:31.672173 kubelet[1737]: I0129 12:08:31.672156    1737 topology_manager.go:138] "Creating topology manager with none policy"
Jan 29 12:08:31.672173 kubelet[1737]: I0129 12:08:31.672164    1737 container_manager_linux.go:304] "Creating device plugin manager"
Jan 29 12:08:31.672386 kubelet[1737]: I0129 12:08:31.672353    1737 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 12:08:31.677892 kubelet[1737]: I0129 12:08:31.677861    1737 kubelet.go:446] "Attempting to sync node with API server"
Jan 29 12:08:31.677892 kubelet[1737]: I0129 12:08:31.677891    1737 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 29 12:08:31.677989 kubelet[1737]: I0129 12:08:31.677910    1737 kubelet.go:352] "Adding apiserver pod source"
Jan 29 12:08:31.677989 kubelet[1737]: I0129 12:08:31.677919    1737 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 29 12:08:31.678594 kubelet[1737]: E0129 12:08:31.678392    1737 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:08:31.679179 kubelet[1737]: E0129 12:08:31.679113    1737 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:08:31.682337 kubelet[1737]: I0129 12:08:31.682318    1737 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 29 12:08:31.683005 kubelet[1737]: I0129 12:08:31.682948    1737 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 29 12:08:31.683148 kubelet[1737]: W0129 12:08:31.683134    1737 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 29 12:08:31.684015 kubelet[1737]: I0129 12:08:31.683996    1737 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 29 12:08:31.684069 kubelet[1737]: I0129 12:08:31.684033    1737 server.go:1287] "Started kubelet"
Jan 29 12:08:31.684848 kubelet[1737]: I0129 12:08:31.684136    1737 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 29 12:08:31.686263 kubelet[1737]: I0129 12:08:31.685430    1737 server.go:490] "Adding debug handlers to kubelet server"
Jan 29 12:08:31.686263 kubelet[1737]: W0129 12:08:31.685651    1737 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.106" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jan 29 12:08:31.686263 kubelet[1737]: E0129 12:08:31.685696    1737 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.106\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Jan 29 12:08:31.686263 kubelet[1737]: W0129 12:08:31.685725    1737 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jan 29 12:08:31.686263 kubelet[1737]: E0129 12:08:31.685738    1737 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Jan 29 12:08:31.686443 kubelet[1737]: I0129 12:08:31.686355    1737 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 29 12:08:31.687448 kubelet[1737]: I0129 12:08:31.686466    1737 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 29 12:08:31.687448 kubelet[1737]: E0129 12:08:31.686523    1737 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.106\" not found"
Jan 29 12:08:31.687448 kubelet[1737]: I0129 12:08:31.686544    1737 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 29 12:08:31.687448 kubelet[1737]: I0129 12:08:31.686676    1737 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 29 12:08:31.687448 kubelet[1737]: I0129 12:08:31.686735    1737 reconciler.go:26] "Reconciler: start to sync state"
Jan 29 12:08:31.687448 kubelet[1737]: I0129 12:08:31.686331    1737 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 29 12:08:31.687448 kubelet[1737]: I0129 12:08:31.687112    1737 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 29 12:08:31.690009 kubelet[1737]: E0129 12:08:31.689971    1737 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 29 12:08:31.691016 kubelet[1737]: I0129 12:08:31.690459    1737 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 29 12:08:31.691796 kubelet[1737]: I0129 12:08:31.691779    1737 factory.go:221] Registration of the containerd container factory successfully
Jan 29 12:08:31.691796 kubelet[1737]: I0129 12:08:31.691795    1737 factory.go:221] Registration of the systemd container factory successfully
Jan 29 12:08:31.701998 kubelet[1737]: E0129 12:08:31.701967    1737 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.106\" not found" node="10.0.0.106"
Jan 29 12:08:31.702340 kubelet[1737]: I0129 12:08:31.702327    1737 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 29 12:08:31.702340 kubelet[1737]: I0129 12:08:31.702339    1737 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 29 12:08:31.702408 kubelet[1737]: I0129 12:08:31.702356    1737 state_mem.go:36] "Initialized new in-memory state store"
Jan 29 12:08:31.704111 kubelet[1737]: I0129 12:08:31.704088    1737 policy_none.go:49] "None policy: Start"
Jan 29 12:08:31.704111 kubelet[1737]: I0129 12:08:31.704114    1737 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 29 12:08:31.704218 kubelet[1737]: I0129 12:08:31.704126    1737 state_mem.go:35] "Initializing new in-memory state store"
Jan 29 12:08:31.710102 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 29 12:08:31.718995 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 29 12:08:31.722423 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 29 12:08:31.734137 kubelet[1737]: I0129 12:08:31.734113    1737 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 29 12:08:31.734727 kubelet[1737]: I0129 12:08:31.734468    1737 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 29 12:08:31.734727 kubelet[1737]: I0129 12:08:31.734487    1737 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 29 12:08:31.735041 kubelet[1737]: I0129 12:08:31.735014    1737 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 29 12:08:31.736072 kubelet[1737]: E0129 12:08:31.736052    1737 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 29 12:08:31.736154 kubelet[1737]: E0129 12:08:31.736088    1737 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.106\" not found"
Jan 29 12:08:31.739557 kubelet[1737]: I0129 12:08:31.739430    1737 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 29 12:08:31.740316 kubelet[1737]: I0129 12:08:31.740297    1737 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 29 12:08:31.740397 kubelet[1737]: I0129 12:08:31.740386    1737 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 29 12:08:31.740554 kubelet[1737]: I0129 12:08:31.740541    1737 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 29 12:08:31.740751 kubelet[1737]: I0129 12:08:31.740733    1737 kubelet.go:2388] "Starting kubelet main sync loop"
Jan 29 12:08:31.740972 kubelet[1737]: E0129 12:08:31.740940    1737 kubelet.go:2412] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Jan 29 12:08:31.835315 kubelet[1737]: I0129 12:08:31.835274    1737 kubelet_node_status.go:76] "Attempting to register node" node="10.0.0.106"
Jan 29 12:08:31.844796 kubelet[1737]: I0129 12:08:31.844768    1737 kubelet_node_status.go:79] "Successfully registered node" node="10.0.0.106"
Jan 29 12:08:31.845036 kubelet[1737]: E0129 12:08:31.844903    1737 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"10.0.0.106\": node \"10.0.0.106\" not found"
Jan 29 12:08:31.854347 kubelet[1737]: E0129 12:08:31.854311    1737 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.106\" not found"
Jan 29 12:08:31.955479 kubelet[1737]: E0129 12:08:31.955339    1737 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.106\" not found"
Jan 29 12:08:32.055973 kubelet[1737]: E0129 12:08:32.055935    1737 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.106\" not found"
Jan 29 12:08:32.156779 kubelet[1737]: E0129 12:08:32.156741    1737 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.106\" not found"
Jan 29 12:08:32.257740 kubelet[1737]: E0129 12:08:32.257635    1737 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.106\" not found"
Jan 29 12:08:32.334257 sudo[1611]: pam_unix(sudo:session): session closed for user root
Jan 29 12:08:32.335963 sshd[1608]: pam_unix(sshd:session): session closed for user core
Jan 29 12:08:32.338498 systemd[1]: sshd@6-10.0.0.106:22-10.0.0.1:53714.service: Deactivated successfully.
Jan 29 12:08:32.339900 systemd[1]: session-7.scope: Deactivated successfully.
Jan 29 12:08:32.341070 systemd-logind[1422]: Session 7 logged out. Waiting for processes to exit.
Jan 29 12:08:32.341993 systemd-logind[1422]: Removed session 7.
Jan 29 12:08:32.358233 kubelet[1737]: E0129 12:08:32.358192    1737 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.106\" not found"
Jan 29 12:08:32.458879 kubelet[1737]: E0129 12:08:32.458842    1737 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.106\" not found"
Jan 29 12:08:32.559605 kubelet[1737]: E0129 12:08:32.559492    1737 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.106\" not found"
Jan 29 12:08:32.610275 kubelet[1737]: I0129 12:08:32.610221    1737 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 29 12:08:32.610647 kubelet[1737]: W0129 12:08:32.610483    1737 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 29 12:08:32.610647 kubelet[1737]: W0129 12:08:32.610483    1737 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Jan 29 12:08:32.660590 kubelet[1737]: E0129 12:08:32.660545    1737 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.106\" not found"
Jan 29 12:08:32.678725 kubelet[1737]: E0129 12:08:32.678697    1737 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:08:32.761511 kubelet[1737]: E0129 12:08:32.761467    1737 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.106\" not found"
Jan 29 12:08:32.863404 kubelet[1737]: E0129 12:08:32.862126    1737 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.106\" not found"
Jan 29 12:08:32.962813 kubelet[1737]: E0129 12:08:32.962754    1737 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.106\" not found"
Jan 29 12:08:33.063539 kubelet[1737]: E0129 12:08:33.063479    1737 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.106\" not found"
Jan 29 12:08:33.164358 kubelet[1737]: I0129 12:08:33.164233    1737 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Jan 29 12:08:33.164755 containerd[1437]: time="2025-01-29T12:08:33.164661411Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 29 12:08:33.165031 kubelet[1737]: I0129 12:08:33.164913    1737 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Jan 29 12:08:33.679495 kubelet[1737]: I0129 12:08:33.679462    1737 apiserver.go:52] "Watching apiserver"
Jan 29 12:08:33.679495 kubelet[1737]: E0129 12:08:33.679477    1737 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:08:33.683730 kubelet[1737]: E0129 12:08:33.683618    1737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w8lwv" podUID="9851e09c-960e-4f3c-998e-b4757588d7ae"
Jan 29 12:08:33.687923 kubelet[1737]: I0129 12:08:33.687894    1737 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 29 12:08:33.688442 systemd[1]: Created slice kubepods-besteffort-pod4e60329d_9540_43b1_86d2_3e2e40617e27.slice - libcontainer container kubepods-besteffort-pod4e60329d_9540_43b1_86d2_3e2e40617e27.slice.
Jan 29 12:08:33.698150 kubelet[1737]: I0129 12:08:33.698115    1737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/0f76dd26-b28a-4510-9930-c20b2ead284c-cni-net-dir\") pod \"calico-node-hxkr9\" (UID: \"0f76dd26-b28a-4510-9930-c20b2ead284c\") " pod="calico-system/calico-node-hxkr9"
Jan 29 12:08:33.698150 kubelet[1737]: I0129 12:08:33.698146    1737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/0f76dd26-b28a-4510-9930-c20b2ead284c-cni-log-dir\") pod \"calico-node-hxkr9\" (UID: \"0f76dd26-b28a-4510-9930-c20b2ead284c\") " pod="calico-system/calico-node-hxkr9"
Jan 29 12:08:33.698260 kubelet[1737]: I0129 12:08:33.698167    1737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/0f76dd26-b28a-4510-9930-c20b2ead284c-flexvol-driver-host\") pod \"calico-node-hxkr9\" (UID: \"0f76dd26-b28a-4510-9930-c20b2ead284c\") " pod="calico-system/calico-node-hxkr9"
Jan 29 12:08:33.698260 kubelet[1737]: I0129 12:08:33.698185    1737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f76dd26-b28a-4510-9930-c20b2ead284c-lib-modules\") pod \"calico-node-hxkr9\" (UID: \"0f76dd26-b28a-4510-9930-c20b2ead284c\") " pod="calico-system/calico-node-hxkr9"
Jan 29 12:08:33.698260 kubelet[1737]: I0129 12:08:33.698202    1737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0f76dd26-b28a-4510-9930-c20b2ead284c-xtables-lock\") pod \"calico-node-hxkr9\" (UID: \"0f76dd26-b28a-4510-9930-c20b2ead284c\") " pod="calico-system/calico-node-hxkr9"
Jan 29 12:08:33.698260 kubelet[1737]: I0129 12:08:33.698217    1737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f76dd26-b28a-4510-9930-c20b2ead284c-tigera-ca-bundle\") pod \"calico-node-hxkr9\" (UID: \"0f76dd26-b28a-4510-9930-c20b2ead284c\") " pod="calico-system/calico-node-hxkr9"
Jan 29 12:08:33.698260 kubelet[1737]: I0129 12:08:33.698232    1737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/0f76dd26-b28a-4510-9930-c20b2ead284c-var-run-calico\") pod \"calico-node-hxkr9\" (UID: \"0f76dd26-b28a-4510-9930-c20b2ead284c\") " pod="calico-system/calico-node-hxkr9"
Jan 29 12:08:33.698378 kubelet[1737]: I0129 12:08:33.698257    1737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0f76dd26-b28a-4510-9930-c20b2ead284c-var-lib-calico\") pod \"calico-node-hxkr9\" (UID: \"0f76dd26-b28a-4510-9930-c20b2ead284c\") " pod="calico-system/calico-node-hxkr9"
Jan 29 12:08:33.698378 kubelet[1737]: I0129 12:08:33.698273    1737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4e60329d-9540-43b1-86d2-3e2e40617e27-kube-proxy\") pod \"kube-proxy-zzkv9\" (UID: \"4e60329d-9540-43b1-86d2-3e2e40617e27\") " pod="kube-system/kube-proxy-zzkv9"
Jan 29 12:08:33.698378 kubelet[1737]: I0129 12:08:33.698289    1737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e60329d-9540-43b1-86d2-3e2e40617e27-xtables-lock\") pod \"kube-proxy-zzkv9\" (UID: \"4e60329d-9540-43b1-86d2-3e2e40617e27\") " pod="kube-system/kube-proxy-zzkv9"
Jan 29 12:08:33.698378 kubelet[1737]: I0129 12:08:33.698306    1737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbs7h\" (UniqueName: \"kubernetes.io/projected/4e60329d-9540-43b1-86d2-3e2e40617e27-kube-api-access-mbs7h\") pod \"kube-proxy-zzkv9\" (UID: \"4e60329d-9540-43b1-86d2-3e2e40617e27\") " pod="kube-system/kube-proxy-zzkv9"
Jan 29 12:08:33.698378 kubelet[1737]: I0129 12:08:33.698319    1737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/0f76dd26-b28a-4510-9930-c20b2ead284c-policysync\") pod \"calico-node-hxkr9\" (UID: \"0f76dd26-b28a-4510-9930-c20b2ead284c\") " pod="calico-system/calico-node-hxkr9"
Jan 29 12:08:33.698578 kubelet[1737]: I0129 12:08:33.698334    1737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9851e09c-960e-4f3c-998e-b4757588d7ae-varrun\") pod \"csi-node-driver-w8lwv\" (UID: \"9851e09c-960e-4f3c-998e-b4757588d7ae\") " pod="calico-system/csi-node-driver-w8lwv"
Jan 29 12:08:33.698578 kubelet[1737]: I0129 12:08:33.698365    1737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9851e09c-960e-4f3c-998e-b4757588d7ae-socket-dir\") pod \"csi-node-driver-w8lwv\" (UID: \"9851e09c-960e-4f3c-998e-b4757588d7ae\") " pod="calico-system/csi-node-driver-w8lwv"
Jan 29 12:08:33.698578 kubelet[1737]: I0129 12:08:33.698379    1737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9851e09c-960e-4f3c-998e-b4757588d7ae-registration-dir\") pod \"csi-node-driver-w8lwv\" (UID: \"9851e09c-960e-4f3c-998e-b4757588d7ae\") " pod="calico-system/csi-node-driver-w8lwv"
Jan 29 12:08:33.698578 kubelet[1737]: I0129 12:08:33.698397    1737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/0f76dd26-b28a-4510-9930-c20b2ead284c-node-certs\") pod \"calico-node-hxkr9\" (UID: \"0f76dd26-b28a-4510-9930-c20b2ead284c\") " pod="calico-system/calico-node-hxkr9"
Jan 29 12:08:33.698578 kubelet[1737]: I0129 12:08:33.698429    1737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/0f76dd26-b28a-4510-9930-c20b2ead284c-cni-bin-dir\") pod \"calico-node-hxkr9\" (UID: \"0f76dd26-b28a-4510-9930-c20b2ead284c\") " pod="calico-system/calico-node-hxkr9"
Jan 29 12:08:33.698734 kubelet[1737]: I0129 12:08:33.698446    1737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk2s6\" (UniqueName: \"kubernetes.io/projected/0f76dd26-b28a-4510-9930-c20b2ead284c-kube-api-access-qk2s6\") pod \"calico-node-hxkr9\" (UID: \"0f76dd26-b28a-4510-9930-c20b2ead284c\") " pod="calico-system/calico-node-hxkr9"
Jan 29 12:08:33.698734 kubelet[1737]: I0129 12:08:33.698485    1737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9851e09c-960e-4f3c-998e-b4757588d7ae-kubelet-dir\") pod \"csi-node-driver-w8lwv\" (UID: \"9851e09c-960e-4f3c-998e-b4757588d7ae\") " pod="calico-system/csi-node-driver-w8lwv"
Jan 29 12:08:33.698734 kubelet[1737]: I0129 12:08:33.698515    1737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pc5m\" (UniqueName: \"kubernetes.io/projected/9851e09c-960e-4f3c-998e-b4757588d7ae-kube-api-access-8pc5m\") pod \"csi-node-driver-w8lwv\" (UID: \"9851e09c-960e-4f3c-998e-b4757588d7ae\") " pod="calico-system/csi-node-driver-w8lwv"
Jan 29 12:08:33.698734 kubelet[1737]: I0129 12:08:33.698541    1737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e60329d-9540-43b1-86d2-3e2e40617e27-lib-modules\") pod \"kube-proxy-zzkv9\" (UID: \"4e60329d-9540-43b1-86d2-3e2e40617e27\") " pod="kube-system/kube-proxy-zzkv9"
Jan 29 12:08:33.701016 systemd[1]: Created slice kubepods-besteffort-pod0f76dd26_b28a_4510_9930_c20b2ead284c.slice - libcontainer container kubepods-besteffort-pod0f76dd26_b28a_4510_9930_c20b2ead284c.slice.
Jan 29 12:08:33.803231 kubelet[1737]: E0129 12:08:33.803189    1737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:08:33.803231 kubelet[1737]: W0129 12:08:33.803221    1737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:08:33.803377 kubelet[1737]: E0129 12:08:33.803245    1737 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:08:33.810855 kubelet[1737]: E0129 12:08:33.810754    1737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:08:33.810855 kubelet[1737]: W0129 12:08:33.810785    1737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:08:33.810855 kubelet[1737]: E0129 12:08:33.810804    1737 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:08:33.811464 kubelet[1737]: E0129 12:08:33.811302    1737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:08:33.811464 kubelet[1737]: W0129 12:08:33.811320    1737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:08:33.811464 kubelet[1737]: E0129 12:08:33.811350    1737 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:08:33.811594 kubelet[1737]: E0129 12:08:33.811570    1737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:08:33.811594 kubelet[1737]: W0129 12:08:33.811588    1737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:08:33.811642 kubelet[1737]: E0129 12:08:33.811599    1737 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:08:33.998689 kubelet[1737]: E0129 12:08:33.998266    1737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:08:33.999056 containerd[1437]: time="2025-01-29T12:08:33.999018091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zzkv9,Uid:4e60329d-9540-43b1-86d2-3e2e40617e27,Namespace:kube-system,Attempt:0,}"
Jan 29 12:08:34.004813 kubelet[1737]: E0129 12:08:34.004764    1737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:08:34.005362 containerd[1437]: time="2025-01-29T12:08:34.005118251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hxkr9,Uid:0f76dd26-b28a-4510-9930-c20b2ead284c,Namespace:calico-system,Attempt:0,}"
Jan 29 12:08:34.655916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount516558455.mount: Deactivated successfully.
Jan 29 12:08:34.661538 containerd[1437]: time="2025-01-29T12:08:34.661488291Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 12:08:34.663303 containerd[1437]: time="2025-01-29T12:08:34.663252251Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 12:08:34.663998 containerd[1437]: time="2025-01-29T12:08:34.663967731Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
Jan 29 12:08:34.664741 containerd[1437]: time="2025-01-29T12:08:34.664695331Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 12:08:34.665580 containerd[1437]: time="2025-01-29T12:08:34.665107171Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 29 12:08:34.667227 containerd[1437]: time="2025-01-29T12:08:34.667174251Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 29 12:08:34.669581 containerd[1437]: time="2025-01-29T12:08:34.669551291Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 664.36332ms"
Jan 29 12:08:34.671026 containerd[1437]: time="2025-01-29T12:08:34.670814531Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 671.70756ms"
Jan 29 12:08:34.680470 kubelet[1737]: E0129 12:08:34.680407    1737 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:08:34.770571 containerd[1437]: time="2025-01-29T12:08:34.770482371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:08:34.770571 containerd[1437]: time="2025-01-29T12:08:34.770550451Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:08:34.770571 containerd[1437]: time="2025-01-29T12:08:34.770563451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:08:34.771059 containerd[1437]: time="2025-01-29T12:08:34.770918531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:08:34.771059 containerd[1437]: time="2025-01-29T12:08:34.771011131Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:08:34.771786 containerd[1437]: time="2025-01-29T12:08:34.771588451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:08:34.771786 containerd[1437]: time="2025-01-29T12:08:34.771725251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:08:34.772409 containerd[1437]: time="2025-01-29T12:08:34.771846531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:08:34.845625 systemd[1]: Started cri-containerd-66a0a416fb0f82f149e032e049bb3579feefc8d0510b92b7d8c07b5f420c61dc.scope - libcontainer container 66a0a416fb0f82f149e032e049bb3579feefc8d0510b92b7d8c07b5f420c61dc.
Jan 29 12:08:34.846917 systemd[1]: Started cri-containerd-f131ab3cd552e37b22f69001928e8e7f39779849dae9cf790130e2545414c1e5.scope - libcontainer container f131ab3cd552e37b22f69001928e8e7f39779849dae9cf790130e2545414c1e5.
Jan 29 12:08:34.868798 containerd[1437]: time="2025-01-29T12:08:34.868737891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zzkv9,Uid:4e60329d-9540-43b1-86d2-3e2e40617e27,Namespace:kube-system,Attempt:0,} returns sandbox id \"66a0a416fb0f82f149e032e049bb3579feefc8d0510b92b7d8c07b5f420c61dc\""
Jan 29 12:08:34.870039 kubelet[1737]: E0129 12:08:34.870005    1737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:08:34.871864 containerd[1437]: time="2025-01-29T12:08:34.871602611Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\""
Jan 29 12:08:34.872262 containerd[1437]: time="2025-01-29T12:08:34.872233411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hxkr9,Uid:0f76dd26-b28a-4510-9930-c20b2ead284c,Namespace:calico-system,Attempt:0,} returns sandbox id \"f131ab3cd552e37b22f69001928e8e7f39779849dae9cf790130e2545414c1e5\""
Jan 29 12:08:34.872973 kubelet[1737]: E0129 12:08:34.872926    1737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:08:35.680859 kubelet[1737]: E0129 12:08:35.680820    1737 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:08:35.741743 kubelet[1737]: E0129 12:08:35.741641    1737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w8lwv" podUID="9851e09c-960e-4f3c-998e-b4757588d7ae"
Jan 29 12:08:36.158370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3066491002.mount: Deactivated successfully.
Jan 29 12:08:36.365850 containerd[1437]: time="2025-01-29T12:08:36.365785411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:08:36.366455 containerd[1437]: time="2025-01-29T12:08:36.366407571Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=27364399"
Jan 29 12:08:36.367056 containerd[1437]: time="2025-01-29T12:08:36.367013571Z" level=info msg="ImageCreate event name:\"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:08:36.368839 containerd[1437]: time="2025-01-29T12:08:36.368808691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:08:36.370219 containerd[1437]: time="2025-01-29T12:08:36.370187851Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"27363416\" in 1.49854612s"
Jan 29 12:08:36.370270 containerd[1437]: time="2025-01-29T12:08:36.370223971Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e124fbed851d756107a6153db4dc52269a2fd34af3cc46f00a2ef113f868aab0\""
Jan 29 12:08:36.371456 containerd[1437]: time="2025-01-29T12:08:36.371409411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 29 12:08:36.372854 containerd[1437]: time="2025-01-29T12:08:36.372724771Z" level=info msg="CreateContainer within sandbox \"66a0a416fb0f82f149e032e049bb3579feefc8d0510b92b7d8c07b5f420c61dc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 29 12:08:36.383626 containerd[1437]: time="2025-01-29T12:08:36.383578771Z" level=info msg="CreateContainer within sandbox \"66a0a416fb0f82f149e032e049bb3579feefc8d0510b92b7d8c07b5f420c61dc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"17116a43d1d9c249348aa32cbecdc28aee6eec5697024bb2c7611d59894507be\""
Jan 29 12:08:36.384169 containerd[1437]: time="2025-01-29T12:08:36.384130851Z" level=info msg="StartContainer for \"17116a43d1d9c249348aa32cbecdc28aee6eec5697024bb2c7611d59894507be\""
Jan 29 12:08:36.411632 systemd[1]: Started cri-containerd-17116a43d1d9c249348aa32cbecdc28aee6eec5697024bb2c7611d59894507be.scope - libcontainer container 17116a43d1d9c249348aa32cbecdc28aee6eec5697024bb2c7611d59894507be.
Jan 29 12:08:36.437096 containerd[1437]: time="2025-01-29T12:08:36.437054131Z" level=info msg="StartContainer for \"17116a43d1d9c249348aa32cbecdc28aee6eec5697024bb2c7611d59894507be\" returns successfully"
Jan 29 12:08:36.681273 kubelet[1737]: E0129 12:08:36.681127    1737 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:08:36.756225 kubelet[1737]: E0129 12:08:36.756177    1737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:08:36.768504 kubelet[1737]: I0129 12:08:36.767143    1737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zzkv9" podStartSLOduration=4.267212611 podStartE2EDuration="5.767124851s" podCreationTimestamp="2025-01-29 12:08:31 +0000 UTC" firstStartedPulling="2025-01-29 12:08:34.871145971 +0000 UTC m=+5.554508721" lastFinishedPulling="2025-01-29 12:08:36.371058211 +0000 UTC m=+7.054420961" observedRunningTime="2025-01-29 12:08:36.766353971 +0000 UTC m=+7.449716761" watchObservedRunningTime="2025-01-29 12:08:36.767124851 +0000 UTC m=+7.450487601"
Jan 29 12:08:36.807510 kubelet[1737]: E0129 12:08:36.807477    1737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:08:36.807510 kubelet[1737]: W0129 12:08:36.807499    1737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:08:36.807624 kubelet[1737]: E0129 12:08:36.807519    1737 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:08:36.807782 kubelet[1737]: E0129 12:08:36.807704    1737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:08:36.807782 kubelet[1737]: W0129 12:08:36.807716    1737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:08:36.807782 kubelet[1737]: E0129 12:08:36.807750    1737 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:08:36.807944 kubelet[1737]: E0129 12:08:36.807929    1737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:08:36.807944 kubelet[1737]: W0129 12:08:36.807942    1737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:08:36.808004 kubelet[1737]: E0129 12:08:36.807951    1737 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:08:36.808122 kubelet[1737]: E0129 12:08:36.808107    1737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:08:36.808122 kubelet[1737]: W0129 12:08:36.808122    1737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:08:36.808176 kubelet[1737]: E0129 12:08:36.808132    1737 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:08:36.808331 kubelet[1737]: E0129 12:08:36.808316    1737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:08:36.808331 kubelet[1737]: W0129 12:08:36.808329    1737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:08:36.808376 kubelet[1737]: E0129 12:08:36.808339    1737 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:08:36.808555 kubelet[1737]: E0129 12:08:36.808540    1737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:08:36.808555 kubelet[1737]: W0129 12:08:36.808554    1737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:08:36.808614 kubelet[1737]: E0129 12:08:36.808563    1737 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:08:36.808716 kubelet[1737]: E0129 12:08:36.808703    1737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:08:36.808716 kubelet[1737]: W0129 12:08:36.808714    1737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:08:36.808758 kubelet[1737]: E0129 12:08:36.808722    1737 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:08:36.808858 kubelet[1737]: E0129 12:08:36.808848    1737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:08:36.808883 kubelet[1737]: W0129 12:08:36.808860    1737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:08:36.808883 kubelet[1737]: E0129 12:08:36.808867    1737 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:08:36.809044 kubelet[1737]: E0129 12:08:36.809032    1737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:08:36.809070 kubelet[1737]: W0129 12:08:36.809044    1737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:08:36.809070 kubelet[1737]: E0129 12:08:36.809053    1737 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:08:36.809267 kubelet[1737]: E0129 12:08:36.809241    1737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:08:36.809267 kubelet[1737]: W0129 12:08:36.809253    1737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:08:36.809267 kubelet[1737]: E0129 12:08:36.809261    1737 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:08:36.809445 kubelet[1737]: E0129 12:08:36.809431    1737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:08:36.809445 kubelet[1737]: W0129 12:08:36.809443    1737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:08:36.809498 kubelet[1737]: E0129 12:08:36.809451    1737 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:08:36.809598 kubelet[1737]: E0129 12:08:36.809587    1737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:08:36.809598 kubelet[1737]: W0129 12:08:36.809598    1737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:08:36.809648 kubelet[1737]: E0129 12:08:36.809605    1737 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:08:36.809747 kubelet[1737]: E0129 12:08:36.809734    1737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:08:36.809747 kubelet[1737]: W0129 12:08:36.809746    1737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:08:36.809797 kubelet[1737]: E0129 12:08:36.809753    1737 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:08:36.810087 kubelet[1737]: E0129 12:08:36.810075    1737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:08:36.810123 kubelet[1737]: W0129 12:08:36.810087    1737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:08:36.810123 kubelet[1737]: E0129 12:08:36.810097    1737 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:08:36.810257 kubelet[1737]: E0129 12:08:36.810246    1737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:08:36.810257 kubelet[1737]: W0129 12:08:36.810257    1737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:08:36.810318 kubelet[1737]: E0129 12:08:36.810266    1737 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:08:36.810446 kubelet[1737]: E0129 12:08:36.810400    1737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:08:36.810446 kubelet[1737]: W0129 12:08:36.810442    1737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:08:36.810499 kubelet[1737]: E0129 12:08:36.810452    1737 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:08:36.810681 kubelet[1737]: E0129 12:08:36.810667    1737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:08:36.810681 kubelet[1737]: W0129 12:08:36.810680    1737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:08:36.810742 kubelet[1737]: E0129 12:08:36.810689    1737 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:08:36.810847 kubelet[1737]: E0129 12:08:36.810832    1737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:08:36.810847 kubelet[1737]: W0129 12:08:36.810843    1737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:08:36.810900 kubelet[1737]: E0129 12:08:36.810851    1737 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:08:36.810983 kubelet[1737]: E0129 12:08:36.810972    1737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:08:36.810983 kubelet[1737]: W0129 12:08:36.810982    1737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:08:36.811046 kubelet[1737]: E0129 12:08:36.810990    1737 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:08:36.811126 kubelet[1737]: E0129 12:08:36.811114    1737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:08:36.811155 kubelet[1737]: W0129 12:08:36.811125    1737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:08:36.811155 kubelet[1737]: E0129 12:08:36.811133    1737 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:08:36.817536 kubelet[1737]: E0129 12:08:36.817518    1737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:08:36.817536 kubelet[1737]: W0129 12:08:36.817533    1737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:08:36.817651 kubelet[1737]: E0129 12:08:36.817545    1737 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:08:36.817759 kubelet[1737]: E0129 12:08:36.817743    1737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:08:36.817786 kubelet[1737]: W0129 12:08:36.817759    1737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:08:36.817786 kubelet[1737]: E0129 12:08:36.817771    1737 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:08:36.817956 kubelet[1737]: E0129 12:08:36.817943    1737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:08:36.817956 kubelet[1737]: W0129 12:08:36.817956    1737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:08:36.818008 kubelet[1737]: E0129 12:08:36.817965    1737 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:08:36.818116 kubelet[1737]: E0129 12:08:36.818104    1737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:08:36.818116 kubelet[1737]: W0129 12:08:36.818116    1737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:08:36.818161 kubelet[1737]: E0129 12:08:36.818124    1737 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:08:36.818253 kubelet[1737]: E0129 12:08:36.818242    1737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:08:36.818253 kubelet[1737]: W0129 12:08:36.818253    1737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:08:36.818302 kubelet[1737]: E0129 12:08:36.818262    1737 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:08:36.818443 kubelet[1737]: E0129 12:08:36.818430    1737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:08:36.818443 kubelet[1737]: W0129 12:08:36.818442    1737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:08:36.818488 kubelet[1737]: E0129 12:08:36.818450    1737 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:08:36.818768 kubelet[1737]: E0129 12:08:36.818754    1737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:08:36.818768 kubelet[1737]: W0129 12:08:36.818767    1737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:08:36.818851 kubelet[1737]: E0129 12:08:36.818835    1737 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:08:36.818917 kubelet[1737]: E0129 12:08:36.818906    1737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:08:36.818917 kubelet[1737]: W0129 12:08:36.818917    1737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:08:36.818961 kubelet[1737]: E0129 12:08:36.818924    1737 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:08:36.819082 kubelet[1737]: E0129 12:08:36.819071    1737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:08:36.819082 kubelet[1737]: W0129 12:08:36.819081    1737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:08:36.819131 kubelet[1737]: E0129 12:08:36.819089    1737 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:08:36.819235 kubelet[1737]: E0129 12:08:36.819224    1737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:08:36.819257 kubelet[1737]: W0129 12:08:36.819235    1737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:08:36.819257 kubelet[1737]: E0129 12:08:36.819242    1737 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:08:36.819451 kubelet[1737]: E0129 12:08:36.819438    1737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:08:36.819451 kubelet[1737]: W0129 12:08:36.819450    1737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:08:36.819496 kubelet[1737]: E0129 12:08:36.819459    1737 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:08:36.819756 kubelet[1737]: E0129 12:08:36.819743    1737 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 29 12:08:36.819756 kubelet[1737]: W0129 12:08:36.819756    1737 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 29 12:08:36.819803 kubelet[1737]: E0129 12:08:36.819765    1737 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 29 12:08:37.261824 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3274160544.mount: Deactivated successfully.
Jan 29 12:08:37.315711 containerd[1437]: time="2025-01-29T12:08:37.315660051Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:08:37.316190 containerd[1437]: time="2025-01-29T12:08:37.316154011Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6487603"
Jan 29 12:08:37.316977 containerd[1437]: time="2025-01-29T12:08:37.316948491Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:08:37.318986 containerd[1437]: time="2025-01-29T12:08:37.318954611Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:08:37.319659 containerd[1437]: time="2025-01-29T12:08:37.319615131Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 948.1274ms"
Jan 29 12:08:37.319688 containerd[1437]: time="2025-01-29T12:08:37.319660971Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\""
Jan 29 12:08:37.321408 containerd[1437]: time="2025-01-29T12:08:37.321373731Z" level=info msg="CreateContainer within sandbox \"f131ab3cd552e37b22f69001928e8e7f39779849dae9cf790130e2545414c1e5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 29 12:08:37.330831 containerd[1437]: time="2025-01-29T12:08:37.330783251Z" level=info msg="CreateContainer within sandbox \"f131ab3cd552e37b22f69001928e8e7f39779849dae9cf790130e2545414c1e5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"30fe46a09aa31840896af0e843632210b62c29c58619e73f97c3b8067aeb026d\""
Jan 29 12:08:37.331230 containerd[1437]: time="2025-01-29T12:08:37.331188131Z" level=info msg="StartContainer for \"30fe46a09aa31840896af0e843632210b62c29c58619e73f97c3b8067aeb026d\""
Jan 29 12:08:37.357580 systemd[1]: Started cri-containerd-30fe46a09aa31840896af0e843632210b62c29c58619e73f97c3b8067aeb026d.scope - libcontainer container 30fe46a09aa31840896af0e843632210b62c29c58619e73f97c3b8067aeb026d.
Jan 29 12:08:37.383927 containerd[1437]: time="2025-01-29T12:08:37.380881171Z" level=info msg="StartContainer for \"30fe46a09aa31840896af0e843632210b62c29c58619e73f97c3b8067aeb026d\" returns successfully"
Jan 29 12:08:37.418964 systemd[1]: cri-containerd-30fe46a09aa31840896af0e843632210b62c29c58619e73f97c3b8067aeb026d.scope: Deactivated successfully.
Jan 29 12:08:37.484848 containerd[1437]: time="2025-01-29T12:08:37.484790491Z" level=info msg="shim disconnected" id=30fe46a09aa31840896af0e843632210b62c29c58619e73f97c3b8067aeb026d namespace=k8s.io
Jan 29 12:08:37.485204 containerd[1437]: time="2025-01-29T12:08:37.485044771Z" level=warning msg="cleaning up after shim disconnected" id=30fe46a09aa31840896af0e843632210b62c29c58619e73f97c3b8067aeb026d namespace=k8s.io
Jan 29 12:08:37.485204 containerd[1437]: time="2025-01-29T12:08:37.485061851Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:08:37.681509 kubelet[1737]: E0129 12:08:37.681466    1737 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:08:37.741459 kubelet[1737]: E0129 12:08:37.741105    1737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w8lwv" podUID="9851e09c-960e-4f3c-998e-b4757588d7ae"
Jan 29 12:08:37.758920 kubelet[1737]: E0129 12:08:37.758888    1737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:08:37.759060 kubelet[1737]: E0129 12:08:37.758961    1737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:08:37.759849 containerd[1437]: time="2025-01-29T12:08:37.759820171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Jan 29 12:08:38.682536 kubelet[1737]: E0129 12:08:38.682486    1737 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:08:39.592165 containerd[1437]: time="2025-01-29T12:08:39.592113931Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:08:39.592672 containerd[1437]: time="2025-01-29T12:08:39.592628291Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123"
Jan 29 12:08:39.593477 containerd[1437]: time="2025-01-29T12:08:39.593444691Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:08:39.595978 containerd[1437]: time="2025-01-29T12:08:39.595939411Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:08:39.596468 containerd[1437]: time="2025-01-29T12:08:39.596439331Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 1.83656512s"
Jan 29 12:08:39.596513 containerd[1437]: time="2025-01-29T12:08:39.596467891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\""
Jan 29 12:08:39.598528 containerd[1437]: time="2025-01-29T12:08:39.598497171Z" level=info msg="CreateContainer within sandbox \"f131ab3cd552e37b22f69001928e8e7f39779849dae9cf790130e2545414c1e5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 29 12:08:39.610009 containerd[1437]: time="2025-01-29T12:08:39.609967771Z" level=info msg="CreateContainer within sandbox \"f131ab3cd552e37b22f69001928e8e7f39779849dae9cf790130e2545414c1e5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0fef2ff5bdbc773c6d49e98aea344c7b402b9567160f49770690b2adf201a862\""
Jan 29 12:08:39.610387 containerd[1437]: time="2025-01-29T12:08:39.610364811Z" level=info msg="StartContainer for \"0fef2ff5bdbc773c6d49e98aea344c7b402b9567160f49770690b2adf201a862\""
Jan 29 12:08:39.635580 systemd[1]: Started cri-containerd-0fef2ff5bdbc773c6d49e98aea344c7b402b9567160f49770690b2adf201a862.scope - libcontainer container 0fef2ff5bdbc773c6d49e98aea344c7b402b9567160f49770690b2adf201a862.
Jan 29 12:08:39.658813 containerd[1437]: time="2025-01-29T12:08:39.658767011Z" level=info msg="StartContainer for \"0fef2ff5bdbc773c6d49e98aea344c7b402b9567160f49770690b2adf201a862\" returns successfully"
Jan 29 12:08:39.683311 kubelet[1737]: E0129 12:08:39.683275    1737 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:08:39.742635 kubelet[1737]: E0129 12:08:39.741851    1737 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w8lwv" podUID="9851e09c-960e-4f3c-998e-b4757588d7ae"
Jan 29 12:08:39.764192 kubelet[1737]: E0129 12:08:39.763973    1737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:08:40.098061 containerd[1437]: time="2025-01-29T12:08:40.097915251Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE         \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 29 12:08:40.099562 systemd[1]: cri-containerd-0fef2ff5bdbc773c6d49e98aea344c7b402b9567160f49770690b2adf201a862.scope: Deactivated successfully.
Jan 29 12:08:40.117897 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0fef2ff5bdbc773c6d49e98aea344c7b402b9567160f49770690b2adf201a862-rootfs.mount: Deactivated successfully.
Jan 29 12:08:40.192971 kubelet[1737]: I0129 12:08:40.192065    1737 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
Jan 29 12:08:40.331924 containerd[1437]: time="2025-01-29T12:08:40.331853371Z" level=info msg="shim disconnected" id=0fef2ff5bdbc773c6d49e98aea344c7b402b9567160f49770690b2adf201a862 namespace=k8s.io
Jan 29 12:08:40.331924 containerd[1437]: time="2025-01-29T12:08:40.331912291Z" level=warning msg="cleaning up after shim disconnected" id=0fef2ff5bdbc773c6d49e98aea344c7b402b9567160f49770690b2adf201a862 namespace=k8s.io
Jan 29 12:08:40.331924 containerd[1437]: time="2025-01-29T12:08:40.331920771Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 29 12:08:40.684227 kubelet[1737]: E0129 12:08:40.684178    1737 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:08:40.767361 kubelet[1737]: E0129 12:08:40.767256    1737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:08:40.768612 containerd[1437]: time="2025-01-29T12:08:40.768085011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Jan 29 12:08:41.684576 kubelet[1737]: E0129 12:08:41.684525    1737 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:08:41.754318 systemd[1]: Created slice kubepods-besteffort-pod9851e09c_960e_4f3c_998e_b4757588d7ae.slice - libcontainer container kubepods-besteffort-pod9851e09c_960e_4f3c_998e_b4757588d7ae.slice.
Jan 29 12:08:41.769220 containerd[1437]: time="2025-01-29T12:08:41.769165051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w8lwv,Uid:9851e09c-960e-4f3c-998e-b4757588d7ae,Namespace:calico-system,Attempt:0,}"
Jan 29 12:08:41.908564 containerd[1437]: time="2025-01-29T12:08:41.908519051Z" level=error msg="Failed to destroy network for sandbox \"39580900ef4fb7cbdf98b056838d728af25e1ec2bb37ad667a0f912ad214f31f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:08:41.908927 containerd[1437]: time="2025-01-29T12:08:41.908887851Z" level=error msg="encountered an error cleaning up failed sandbox \"39580900ef4fb7cbdf98b056838d728af25e1ec2bb37ad667a0f912ad214f31f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:08:41.908973 containerd[1437]: time="2025-01-29T12:08:41.908942971Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w8lwv,Uid:9851e09c-960e-4f3c-998e-b4757588d7ae,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"39580900ef4fb7cbdf98b056838d728af25e1ec2bb37ad667a0f912ad214f31f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:08:41.909175 kubelet[1737]: E0129 12:08:41.909137    1737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39580900ef4fb7cbdf98b056838d728af25e1ec2bb37ad667a0f912ad214f31f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:08:41.909582 kubelet[1737]: E0129 12:08:41.909349    1737 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39580900ef4fb7cbdf98b056838d728af25e1ec2bb37ad667a0f912ad214f31f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w8lwv"
Jan 29 12:08:41.909582 kubelet[1737]: E0129 12:08:41.909389    1737 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39580900ef4fb7cbdf98b056838d728af25e1ec2bb37ad667a0f912ad214f31f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w8lwv"
Jan 29 12:08:41.909582 kubelet[1737]: E0129 12:08:41.909446    1737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w8lwv_calico-system(9851e09c-960e-4f3c-998e-b4757588d7ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w8lwv_calico-system(9851e09c-960e-4f3c-998e-b4757588d7ae)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"39580900ef4fb7cbdf98b056838d728af25e1ec2bb37ad667a0f912ad214f31f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w8lwv" podUID="9851e09c-960e-4f3c-998e-b4757588d7ae"
Jan 29 12:08:41.910176 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-39580900ef4fb7cbdf98b056838d728af25e1ec2bb37ad667a0f912ad214f31f-shm.mount: Deactivated successfully.
Jan 29 12:08:42.684883 kubelet[1737]: E0129 12:08:42.684826    1737 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:08:42.777034 kubelet[1737]: I0129 12:08:42.776492    1737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39580900ef4fb7cbdf98b056838d728af25e1ec2bb37ad667a0f912ad214f31f"
Jan 29 12:08:42.777492 containerd[1437]: time="2025-01-29T12:08:42.777449891Z" level=info msg="StopPodSandbox for \"39580900ef4fb7cbdf98b056838d728af25e1ec2bb37ad667a0f912ad214f31f\""
Jan 29 12:08:42.777759 containerd[1437]: time="2025-01-29T12:08:42.777607811Z" level=info msg="Ensure that sandbox 39580900ef4fb7cbdf98b056838d728af25e1ec2bb37ad667a0f912ad214f31f in task-service has been cleanup successfully"
Jan 29 12:08:42.808702 containerd[1437]: time="2025-01-29T12:08:42.808558211Z" level=error msg="StopPodSandbox for \"39580900ef4fb7cbdf98b056838d728af25e1ec2bb37ad667a0f912ad214f31f\" failed" error="failed to destroy network for sandbox \"39580900ef4fb7cbdf98b056838d728af25e1ec2bb37ad667a0f912ad214f31f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:08:42.808947 kubelet[1737]: E0129 12:08:42.808901    1737 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"39580900ef4fb7cbdf98b056838d728af25e1ec2bb37ad667a0f912ad214f31f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="39580900ef4fb7cbdf98b056838d728af25e1ec2bb37ad667a0f912ad214f31f"
Jan 29 12:08:42.809007 kubelet[1737]: E0129 12:08:42.808962    1737 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"39580900ef4fb7cbdf98b056838d728af25e1ec2bb37ad667a0f912ad214f31f"}
Jan 29 12:08:42.809007 kubelet[1737]: E0129 12:08:42.809016    1737 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9851e09c-960e-4f3c-998e-b4757588d7ae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"39580900ef4fb7cbdf98b056838d728af25e1ec2bb37ad667a0f912ad214f31f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Jan 29 12:08:42.809146 kubelet[1737]: E0129 12:08:42.809040    1737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9851e09c-960e-4f3c-998e-b4757588d7ae\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"39580900ef4fb7cbdf98b056838d728af25e1ec2bb37ad667a0f912ad214f31f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w8lwv" podUID="9851e09c-960e-4f3c-998e-b4757588d7ae"
Jan 29 12:08:43.476846 systemd[1]: Created slice kubepods-besteffort-podd9eac917_859e_4d14_9706_541b34d8975d.slice - libcontainer container kubepods-besteffort-podd9eac917_859e_4d14_9706_541b34d8975d.slice.
Jan 29 12:08:43.483936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2707623259.mount: Deactivated successfully.
Jan 29 12:08:43.663438 kubelet[1737]: I0129 12:08:43.663376    1737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x65bm\" (UniqueName: \"kubernetes.io/projected/d9eac917-859e-4d14-9706-541b34d8975d-kube-api-access-x65bm\") pod \"nginx-deployment-7fcdb87857-jnv8b\" (UID: \"d9eac917-859e-4d14-9706-541b34d8975d\") " pod="default/nginx-deployment-7fcdb87857-jnv8b"
Jan 29 12:08:43.685883 kubelet[1737]: E0129 12:08:43.685856    1737 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:08:43.735473 containerd[1437]: time="2025-01-29T12:08:43.735348371Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:08:43.736321 containerd[1437]: time="2025-01-29T12:08:43.736120051Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762"
Jan 29 12:08:43.737111 containerd[1437]: time="2025-01-29T12:08:43.737052171Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:08:43.738851 containerd[1437]: time="2025-01-29T12:08:43.738802331Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:08:43.739738 containerd[1437]: time="2025-01-29T12:08:43.739591091Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 2.97146316s"
Jan 29 12:08:43.739738 containerd[1437]: time="2025-01-29T12:08:43.739629651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\""
Jan 29 12:08:43.748712 containerd[1437]: time="2025-01-29T12:08:43.748677171Z" level=info msg="CreateContainer within sandbox \"f131ab3cd552e37b22f69001928e8e7f39779849dae9cf790130e2545414c1e5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jan 29 12:08:43.760941 containerd[1437]: time="2025-01-29T12:08:43.760905051Z" level=info msg="CreateContainer within sandbox \"f131ab3cd552e37b22f69001928e8e7f39779849dae9cf790130e2545414c1e5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"314b931a51917713fbe18962e653d7f2ba954e274658cc3d5e414f846e77233c\""
Jan 29 12:08:43.761407 containerd[1437]: time="2025-01-29T12:08:43.761337651Z" level=info msg="StartContainer for \"314b931a51917713fbe18962e653d7f2ba954e274658cc3d5e414f846e77233c\""
Jan 29 12:08:43.779191 containerd[1437]: time="2025-01-29T12:08:43.779141571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-jnv8b,Uid:d9eac917-859e-4d14-9706-541b34d8975d,Namespace:default,Attempt:0,}"
Jan 29 12:08:43.782594 systemd[1]: Started cri-containerd-314b931a51917713fbe18962e653d7f2ba954e274658cc3d5e414f846e77233c.scope - libcontainer container 314b931a51917713fbe18962e653d7f2ba954e274658cc3d5e414f846e77233c.
Jan 29 12:08:43.807082 containerd[1437]: time="2025-01-29T12:08:43.807035731Z" level=info msg="StartContainer for \"314b931a51917713fbe18962e653d7f2ba954e274658cc3d5e414f846e77233c\" returns successfully"
Jan 29 12:08:43.840912 containerd[1437]: time="2025-01-29T12:08:43.840853771Z" level=error msg="Failed to destroy network for sandbox \"813b72ff7db4b2ae2b5fae1561aa996905edbe18e8daa3bacccc15b8e4d89ef4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:08:43.841229 containerd[1437]: time="2025-01-29T12:08:43.841186411Z" level=error msg="encountered an error cleaning up failed sandbox \"813b72ff7db4b2ae2b5fae1561aa996905edbe18e8daa3bacccc15b8e4d89ef4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:08:43.841278 containerd[1437]: time="2025-01-29T12:08:43.841255611Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-jnv8b,Uid:d9eac917-859e-4d14-9706-541b34d8975d,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"813b72ff7db4b2ae2b5fae1561aa996905edbe18e8daa3bacccc15b8e4d89ef4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:08:43.841507 kubelet[1737]: E0129 12:08:43.841474    1737 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"813b72ff7db4b2ae2b5fae1561aa996905edbe18e8daa3bacccc15b8e4d89ef4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 29 12:08:43.841880 kubelet[1737]: E0129 12:08:43.841617    1737 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"813b72ff7db4b2ae2b5fae1561aa996905edbe18e8daa3bacccc15b8e4d89ef4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-jnv8b"
Jan 29 12:08:43.841880 kubelet[1737]: E0129 12:08:43.841643    1737 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"813b72ff7db4b2ae2b5fae1561aa996905edbe18e8daa3bacccc15b8e4d89ef4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-jnv8b"
Jan 29 12:08:43.841880 kubelet[1737]: E0129 12:08:43.841685    1737 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-jnv8b_default(d9eac917-859e-4d14-9706-541b34d8975d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-jnv8b_default(d9eac917-859e-4d14-9706-541b34d8975d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"813b72ff7db4b2ae2b5fae1561aa996905edbe18e8daa3bacccc15b8e4d89ef4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-jnv8b" podUID="d9eac917-859e-4d14-9706-541b34d8975d"
Jan 29 12:08:43.941899 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Jan 29 12:08:43.942117 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jan 29 12:08:44.686168 kubelet[1737]: E0129 12:08:44.686099    1737 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:08:44.784355 kubelet[1737]: E0129 12:08:44.784075    1737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:08:44.785022 kubelet[1737]: I0129 12:08:44.784965    1737 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="813b72ff7db4b2ae2b5fae1561aa996905edbe18e8daa3bacccc15b8e4d89ef4"
Jan 29 12:08:44.785408 containerd[1437]: time="2025-01-29T12:08:44.785369931Z" level=info msg="StopPodSandbox for \"813b72ff7db4b2ae2b5fae1561aa996905edbe18e8daa3bacccc15b8e4d89ef4\""
Jan 29 12:08:44.785656 containerd[1437]: time="2025-01-29T12:08:44.785532011Z" level=info msg="Ensure that sandbox 813b72ff7db4b2ae2b5fae1561aa996905edbe18e8daa3bacccc15b8e4d89ef4 in task-service has been cleanup successfully"
Jan 29 12:08:44.798302 kubelet[1737]: I0129 12:08:44.798236    1737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hxkr9" podStartSLOduration=4.931271171 podStartE2EDuration="13.798217891s" podCreationTimestamp="2025-01-29 12:08:31 +0000 UTC" firstStartedPulling="2025-01-29 12:08:34.873375931 +0000 UTC m=+5.556738681" lastFinishedPulling="2025-01-29 12:08:43.740322651 +0000 UTC m=+14.423685401" observedRunningTime="2025-01-29 12:08:44.798037371 +0000 UTC m=+15.481400121" watchObservedRunningTime="2025-01-29 12:08:44.798217891 +0000 UTC m=+15.481580641"
Jan 29 12:08:44.886655 containerd[1437]: 2025-01-29 12:08:44.826 [INFO][2416] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="813b72ff7db4b2ae2b5fae1561aa996905edbe18e8daa3bacccc15b8e4d89ef4"
Jan 29 12:08:44.886655 containerd[1437]: 2025-01-29 12:08:44.826 [INFO][2416] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="813b72ff7db4b2ae2b5fae1561aa996905edbe18e8daa3bacccc15b8e4d89ef4" iface="eth0" netns="/var/run/netns/cni-3e4706d7-4fc9-d890-7f36-ebac1e81e2e3"
Jan 29 12:08:44.886655 containerd[1437]: 2025-01-29 12:08:44.826 [INFO][2416] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="813b72ff7db4b2ae2b5fae1561aa996905edbe18e8daa3bacccc15b8e4d89ef4" iface="eth0" netns="/var/run/netns/cni-3e4706d7-4fc9-d890-7f36-ebac1e81e2e3"
Jan 29 12:08:44.886655 containerd[1437]: 2025-01-29 12:08:44.827 [INFO][2416] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone.  Nothing to do. ContainerID="813b72ff7db4b2ae2b5fae1561aa996905edbe18e8daa3bacccc15b8e4d89ef4" iface="eth0" netns="/var/run/netns/cni-3e4706d7-4fc9-d890-7f36-ebac1e81e2e3"
Jan 29 12:08:44.886655 containerd[1437]: 2025-01-29 12:08:44.827 [INFO][2416] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="813b72ff7db4b2ae2b5fae1561aa996905edbe18e8daa3bacccc15b8e4d89ef4"
Jan 29 12:08:44.886655 containerd[1437]: 2025-01-29 12:08:44.827 [INFO][2416] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="813b72ff7db4b2ae2b5fae1561aa996905edbe18e8daa3bacccc15b8e4d89ef4"
Jan 29 12:08:44.886655 containerd[1437]: 2025-01-29 12:08:44.873 [INFO][2424] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="813b72ff7db4b2ae2b5fae1561aa996905edbe18e8daa3bacccc15b8e4d89ef4" HandleID="k8s-pod-network.813b72ff7db4b2ae2b5fae1561aa996905edbe18e8daa3bacccc15b8e4d89ef4" Workload="10.0.0.106-k8s-nginx--deployment--7fcdb87857--jnv8b-eth0"
Jan 29 12:08:44.886655 containerd[1437]: 2025-01-29 12:08:44.873 [INFO][2424] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 12:08:44.886655 containerd[1437]: 2025-01-29 12:08:44.873 [INFO][2424] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 12:08:44.886655 containerd[1437]: 2025-01-29 12:08:44.882 [WARNING][2424] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="813b72ff7db4b2ae2b5fae1561aa996905edbe18e8daa3bacccc15b8e4d89ef4" HandleID="k8s-pod-network.813b72ff7db4b2ae2b5fae1561aa996905edbe18e8daa3bacccc15b8e4d89ef4" Workload="10.0.0.106-k8s-nginx--deployment--7fcdb87857--jnv8b-eth0"
Jan 29 12:08:44.886655 containerd[1437]: 2025-01-29 12:08:44.882 [INFO][2424] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="813b72ff7db4b2ae2b5fae1561aa996905edbe18e8daa3bacccc15b8e4d89ef4" HandleID="k8s-pod-network.813b72ff7db4b2ae2b5fae1561aa996905edbe18e8daa3bacccc15b8e4d89ef4" Workload="10.0.0.106-k8s-nginx--deployment--7fcdb87857--jnv8b-eth0"
Jan 29 12:08:44.886655 containerd[1437]: 2025-01-29 12:08:44.883 [INFO][2424] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 12:08:44.886655 containerd[1437]: 2025-01-29 12:08:44.885 [INFO][2416] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="813b72ff7db4b2ae2b5fae1561aa996905edbe18e8daa3bacccc15b8e4d89ef4"
Jan 29 12:08:44.887141 containerd[1437]: time="2025-01-29T12:08:44.886806771Z" level=info msg="TearDown network for sandbox \"813b72ff7db4b2ae2b5fae1561aa996905edbe18e8daa3bacccc15b8e4d89ef4\" successfully"
Jan 29 12:08:44.887141 containerd[1437]: time="2025-01-29T12:08:44.886834291Z" level=info msg="StopPodSandbox for \"813b72ff7db4b2ae2b5fae1561aa996905edbe18e8daa3bacccc15b8e4d89ef4\" returns successfully"
Jan 29 12:08:44.887622 containerd[1437]: time="2025-01-29T12:08:44.887595731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-jnv8b,Uid:d9eac917-859e-4d14-9706-541b34d8975d,Namespace:default,Attempt:1,}"
Jan 29 12:08:44.888455 systemd[1]: run-netns-cni\x2d3e4706d7\x2d4fc9\x2dd890\x2d7f36\x2debac1e81e2e3.mount: Deactivated successfully.
Jan 29 12:08:44.990930 systemd-networkd[1378]: cali28e6b847ac7: Link UP
Jan 29 12:08:44.991830 systemd-networkd[1378]: cali28e6b847ac7: Gained carrier
Jan 29 12:08:44.998987 containerd[1437]: 2025-01-29 12:08:44.916 [INFO][2432] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 29 12:08:44.998987 containerd[1437]: 2025-01-29 12:08:44.928 [INFO][2432] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.106-k8s-nginx--deployment--7fcdb87857--jnv8b-eth0 nginx-deployment-7fcdb87857- default  d9eac917-859e-4d14-9706-541b34d8975d 984 0 2025-01-29 12:08:43 +0000 UTC <nil> <nil> map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s  10.0.0.106  nginx-deployment-7fcdb87857-jnv8b eth0 default [] []   [kns.default ksa.default.default] cali28e6b847ac7  [] []}} ContainerID="e396770be603c67ea147860f1a13eeeff06d783fb1b7a41b0838b86818e6cced" Namespace="default" Pod="nginx-deployment-7fcdb87857-jnv8b" WorkloadEndpoint="10.0.0.106-k8s-nginx--deployment--7fcdb87857--jnv8b-"
Jan 29 12:08:44.998987 containerd[1437]: 2025-01-29 12:08:44.928 [INFO][2432] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e396770be603c67ea147860f1a13eeeff06d783fb1b7a41b0838b86818e6cced" Namespace="default" Pod="nginx-deployment-7fcdb87857-jnv8b" WorkloadEndpoint="10.0.0.106-k8s-nginx--deployment--7fcdb87857--jnv8b-eth0"
Jan 29 12:08:44.998987 containerd[1437]: 2025-01-29 12:08:44.951 [INFO][2445] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e396770be603c67ea147860f1a13eeeff06d783fb1b7a41b0838b86818e6cced" HandleID="k8s-pod-network.e396770be603c67ea147860f1a13eeeff06d783fb1b7a41b0838b86818e6cced" Workload="10.0.0.106-k8s-nginx--deployment--7fcdb87857--jnv8b-eth0"
Jan 29 12:08:44.998987 containerd[1437]: 2025-01-29 12:08:44.961 [INFO][2445] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e396770be603c67ea147860f1a13eeeff06d783fb1b7a41b0838b86818e6cced" HandleID="k8s-pod-network.e396770be603c67ea147860f1a13eeeff06d783fb1b7a41b0838b86818e6cced" Workload="10.0.0.106-k8s-nginx--deployment--7fcdb87857--jnv8b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e7d30), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.106", "pod":"nginx-deployment-7fcdb87857-jnv8b", "timestamp":"2025-01-29 12:08:44.951174011 +0000 UTC"}, Hostname:"10.0.0.106", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 29 12:08:44.998987 containerd[1437]: 2025-01-29 12:08:44.962 [INFO][2445] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 12:08:44.998987 containerd[1437]: 2025-01-29 12:08:44.962 [INFO][2445] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 12:08:44.998987 containerd[1437]: 2025-01-29 12:08:44.962 [INFO][2445] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.106'
Jan 29 12:08:44.998987 containerd[1437]: 2025-01-29 12:08:44.964 [INFO][2445] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e396770be603c67ea147860f1a13eeeff06d783fb1b7a41b0838b86818e6cced" host="10.0.0.106"
Jan 29 12:08:44.998987 containerd[1437]: 2025-01-29 12:08:44.967 [INFO][2445] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.106"
Jan 29 12:08:44.998987 containerd[1437]: 2025-01-29 12:08:44.971 [INFO][2445] ipam/ipam.go 489: Trying affinity for 192.168.103.192/26 host="10.0.0.106"
Jan 29 12:08:44.998987 containerd[1437]: 2025-01-29 12:08:44.973 [INFO][2445] ipam/ipam.go 155: Attempting to load block cidr=192.168.103.192/26 host="10.0.0.106"
Jan 29 12:08:44.998987 containerd[1437]: 2025-01-29 12:08:44.975 [INFO][2445] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.103.192/26 host="10.0.0.106"
Jan 29 12:08:44.998987 containerd[1437]: 2025-01-29 12:08:44.975 [INFO][2445] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.103.192/26 handle="k8s-pod-network.e396770be603c67ea147860f1a13eeeff06d783fb1b7a41b0838b86818e6cced" host="10.0.0.106"
Jan 29 12:08:44.998987 containerd[1437]: 2025-01-29 12:08:44.976 [INFO][2445] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e396770be603c67ea147860f1a13eeeff06d783fb1b7a41b0838b86818e6cced
Jan 29 12:08:44.998987 containerd[1437]: 2025-01-29 12:08:44.980 [INFO][2445] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.103.192/26 handle="k8s-pod-network.e396770be603c67ea147860f1a13eeeff06d783fb1b7a41b0838b86818e6cced" host="10.0.0.106"
Jan 29 12:08:44.998987 containerd[1437]: 2025-01-29 12:08:44.984 [INFO][2445] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.103.193/26] block=192.168.103.192/26 handle="k8s-pod-network.e396770be603c67ea147860f1a13eeeff06d783fb1b7a41b0838b86818e6cced" host="10.0.0.106"
Jan 29 12:08:44.998987 containerd[1437]: 2025-01-29 12:08:44.984 [INFO][2445] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.103.193/26] handle="k8s-pod-network.e396770be603c67ea147860f1a13eeeff06d783fb1b7a41b0838b86818e6cced" host="10.0.0.106"
Jan 29 12:08:44.998987 containerd[1437]: 2025-01-29 12:08:44.984 [INFO][2445] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 12:08:44.998987 containerd[1437]: 2025-01-29 12:08:44.984 [INFO][2445] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.103.193/26] IPv6=[] ContainerID="e396770be603c67ea147860f1a13eeeff06d783fb1b7a41b0838b86818e6cced" HandleID="k8s-pod-network.e396770be603c67ea147860f1a13eeeff06d783fb1b7a41b0838b86818e6cced" Workload="10.0.0.106-k8s-nginx--deployment--7fcdb87857--jnv8b-eth0"
Jan 29 12:08:44.999634 containerd[1437]: 2025-01-29 12:08:44.986 [INFO][2432] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e396770be603c67ea147860f1a13eeeff06d783fb1b7a41b0838b86818e6cced" Namespace="default" Pod="nginx-deployment-7fcdb87857-jnv8b" WorkloadEndpoint="10.0.0.106-k8s-nginx--deployment--7fcdb87857--jnv8b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.106-k8s-nginx--deployment--7fcdb87857--jnv8b-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"d9eac917-859e-4d14-9706-541b34d8975d", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 8, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.106", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-jnv8b", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.103.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali28e6b847ac7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 12:08:44.999634 containerd[1437]: 2025-01-29 12:08:44.986 [INFO][2432] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.103.193/32] ContainerID="e396770be603c67ea147860f1a13eeeff06d783fb1b7a41b0838b86818e6cced" Namespace="default" Pod="nginx-deployment-7fcdb87857-jnv8b" WorkloadEndpoint="10.0.0.106-k8s-nginx--deployment--7fcdb87857--jnv8b-eth0"
Jan 29 12:08:44.999634 containerd[1437]: 2025-01-29 12:08:44.986 [INFO][2432] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali28e6b847ac7 ContainerID="e396770be603c67ea147860f1a13eeeff06d783fb1b7a41b0838b86818e6cced" Namespace="default" Pod="nginx-deployment-7fcdb87857-jnv8b" WorkloadEndpoint="10.0.0.106-k8s-nginx--deployment--7fcdb87857--jnv8b-eth0"
Jan 29 12:08:44.999634 containerd[1437]: 2025-01-29 12:08:44.991 [INFO][2432] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e396770be603c67ea147860f1a13eeeff06d783fb1b7a41b0838b86818e6cced" Namespace="default" Pod="nginx-deployment-7fcdb87857-jnv8b" WorkloadEndpoint="10.0.0.106-k8s-nginx--deployment--7fcdb87857--jnv8b-eth0"
Jan 29 12:08:44.999634 containerd[1437]: 2025-01-29 12:08:44.991 [INFO][2432] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e396770be603c67ea147860f1a13eeeff06d783fb1b7a41b0838b86818e6cced" Namespace="default" Pod="nginx-deployment-7fcdb87857-jnv8b" WorkloadEndpoint="10.0.0.106-k8s-nginx--deployment--7fcdb87857--jnv8b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.106-k8s-nginx--deployment--7fcdb87857--jnv8b-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"d9eac917-859e-4d14-9706-541b34d8975d", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 8, 43, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.106", ContainerID:"e396770be603c67ea147860f1a13eeeff06d783fb1b7a41b0838b86818e6cced", Pod:"nginx-deployment-7fcdb87857-jnv8b", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.103.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali28e6b847ac7", MAC:"d2:a4:29:34:a7:2e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 12:08:44.999634 containerd[1437]: 2025-01-29 12:08:44.997 [INFO][2432] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e396770be603c67ea147860f1a13eeeff06d783fb1b7a41b0838b86818e6cced" Namespace="default" Pod="nginx-deployment-7fcdb87857-jnv8b" WorkloadEndpoint="10.0.0.106-k8s-nginx--deployment--7fcdb87857--jnv8b-eth0"
Jan 29 12:08:45.016236 containerd[1437]: time="2025-01-29T12:08:45.016130011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:08:45.016236 containerd[1437]: time="2025-01-29T12:08:45.016188411Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:08:45.016236 containerd[1437]: time="2025-01-29T12:08:45.016211131Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:08:45.016372 containerd[1437]: time="2025-01-29T12:08:45.016289691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:08:45.036628 systemd[1]: Started cri-containerd-e396770be603c67ea147860f1a13eeeff06d783fb1b7a41b0838b86818e6cced.scope - libcontainer container e396770be603c67ea147860f1a13eeeff06d783fb1b7a41b0838b86818e6cced.
Jan 29 12:08:45.046980 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 29 12:08:45.061372 containerd[1437]: time="2025-01-29T12:08:45.061283331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-jnv8b,Uid:d9eac917-859e-4d14-9706-541b34d8975d,Namespace:default,Attempt:1,} returns sandbox id \"e396770be603c67ea147860f1a13eeeff06d783fb1b7a41b0838b86818e6cced\""
Jan 29 12:08:45.062354 containerd[1437]: time="2025-01-29T12:08:45.062319371Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 29 12:08:45.279449 kernel: bpftool[2634]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Jan 29 12:08:45.417600 systemd-networkd[1378]: vxlan.calico: Link UP
Jan 29 12:08:45.417606 systemd-networkd[1378]: vxlan.calico: Gained carrier
Jan 29 12:08:45.686802 kubelet[1737]: E0129 12:08:45.686670    1737 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:08:45.787702 kubelet[1737]: I0129 12:08:45.787675    1737 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 12:08:45.788038 kubelet[1737]: E0129 12:08:45.788023    1737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:08:46.480643 systemd-networkd[1378]: cali28e6b847ac7: Gained IPv6LL
Jan 29 12:08:46.687664 kubelet[1737]: E0129 12:08:46.687479    1737 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:08:46.777998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2055818232.mount: Deactivated successfully.
Jan 29 12:08:46.991553 systemd-networkd[1378]: vxlan.calico: Gained IPv6LL
Jan 29 12:08:47.672967 containerd[1437]: time="2025-01-29T12:08:47.672911931Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:08:47.674098 containerd[1437]: time="2025-01-29T12:08:47.673922491Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=67680490"
Jan 29 12:08:47.674837 containerd[1437]: time="2025-01-29T12:08:47.674798931Z" level=info msg="ImageCreate event name:\"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:08:47.677767 containerd[1437]: time="2025-01-29T12:08:47.677728571Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:08:47.678894 containerd[1437]: time="2025-01-29T12:08:47.678771611Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"67680368\" in 2.61641556s"
Jan 29 12:08:47.678894 containerd[1437]: time="2025-01-29T12:08:47.678807491Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\""
Jan 29 12:08:47.681251 containerd[1437]: time="2025-01-29T12:08:47.681178851Z" level=info msg="CreateContainer within sandbox \"e396770be603c67ea147860f1a13eeeff06d783fb1b7a41b0838b86818e6cced\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Jan 29 12:08:47.688649 kubelet[1737]: E0129 12:08:47.688613    1737 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:08:47.692996 containerd[1437]: time="2025-01-29T12:08:47.692889211Z" level=info msg="CreateContainer within sandbox \"e396770be603c67ea147860f1a13eeeff06d783fb1b7a41b0838b86818e6cced\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"f4a7348e7ca6094e6968f10d969979181574743c78922a10385605d088a54b54\""
Jan 29 12:08:47.693279 containerd[1437]: time="2025-01-29T12:08:47.693247051Z" level=info msg="StartContainer for \"f4a7348e7ca6094e6968f10d969979181574743c78922a10385605d088a54b54\""
Jan 29 12:08:47.787563 systemd[1]: Started cri-containerd-f4a7348e7ca6094e6968f10d969979181574743c78922a10385605d088a54b54.scope - libcontainer container f4a7348e7ca6094e6968f10d969979181574743c78922a10385605d088a54b54.
Jan 29 12:08:47.823744 containerd[1437]: time="2025-01-29T12:08:47.823697251Z" level=info msg="StartContainer for \"f4a7348e7ca6094e6968f10d969979181574743c78922a10385605d088a54b54\" returns successfully"
Jan 29 12:08:48.689721 kubelet[1737]: E0129 12:08:48.689674    1737 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:08:48.808755 kubelet[1737]: I0129 12:08:48.808701    1737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-jnv8b" podStartSLOduration=3.190743651 podStartE2EDuration="5.808684411s" podCreationTimestamp="2025-01-29 12:08:43 +0000 UTC" firstStartedPulling="2025-01-29 12:08:45.062046931 +0000 UTC m=+15.745409681" lastFinishedPulling="2025-01-29 12:08:47.679987691 +0000 UTC m=+18.363350441" observedRunningTime="2025-01-29 12:08:48.808649651 +0000 UTC m=+19.492012401" watchObservedRunningTime="2025-01-29 12:08:48.808684411 +0000 UTC m=+19.492047161"
Jan 29 12:08:49.690067 kubelet[1737]: E0129 12:08:49.690020    1737 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:08:50.026633 systemd[1]: Created slice kubepods-besteffort-podd9a9d27e_722f_48b1_9da6_a62ac465abee.slice - libcontainer container kubepods-besteffort-podd9a9d27e_722f_48b1_9da6_a62ac465abee.slice.
Jan 29 12:08:50.202234 kubelet[1737]: I0129 12:08:50.202187    1737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/d9a9d27e-722f-48b1-9da6-a62ac465abee-data\") pod \"nfs-server-provisioner-0\" (UID: \"d9a9d27e-722f-48b1-9da6-a62ac465abee\") " pod="default/nfs-server-provisioner-0"
Jan 29 12:08:50.202234 kubelet[1737]: I0129 12:08:50.202240    1737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7dnh\" (UniqueName: \"kubernetes.io/projected/d9a9d27e-722f-48b1-9da6-a62ac465abee-kube-api-access-j7dnh\") pod \"nfs-server-provisioner-0\" (UID: \"d9a9d27e-722f-48b1-9da6-a62ac465abee\") " pod="default/nfs-server-provisioner-0"
Jan 29 12:08:50.630208 containerd[1437]: time="2025-01-29T12:08:50.630157331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d9a9d27e-722f-48b1-9da6-a62ac465abee,Namespace:default,Attempt:0,}"
Jan 29 12:08:50.690772 kubelet[1737]: E0129 12:08:50.690721    1737 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:08:50.745704 systemd-networkd[1378]: cali60e51b789ff: Link UP
Jan 29 12:08:50.745898 systemd-networkd[1378]: cali60e51b789ff: Gained carrier
Jan 29 12:08:50.756310 containerd[1437]: 2025-01-29 12:08:50.679 [INFO][2813] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.106-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default  d9a9d27e-722f-48b1-9da6-a62ac465abee 1031 0 2025-01-29 12:08:50 +0000 UTC <nil> <nil> map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s  10.0.0.106  nfs-server-provisioner-0 eth0 nfs-server-provisioner [] []   [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff  [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="d0e70a68e4c338b069d790a09865e7711aa95b4e3d31382062b8781bb124a464" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.106-k8s-nfs--server--provisioner--0-"
Jan 29 12:08:50.756310 containerd[1437]: 2025-01-29 12:08:50.679 [INFO][2813] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d0e70a68e4c338b069d790a09865e7711aa95b4e3d31382062b8781bb124a464" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.106-k8s-nfs--server--provisioner--0-eth0"
Jan 29 12:08:50.756310 containerd[1437]: 2025-01-29 12:08:50.701 [INFO][2827] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d0e70a68e4c338b069d790a09865e7711aa95b4e3d31382062b8781bb124a464" HandleID="k8s-pod-network.d0e70a68e4c338b069d790a09865e7711aa95b4e3d31382062b8781bb124a464" Workload="10.0.0.106-k8s-nfs--server--provisioner--0-eth0"
Jan 29 12:08:50.756310 containerd[1437]: 2025-01-29 12:08:50.712 [INFO][2827] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d0e70a68e4c338b069d790a09865e7711aa95b4e3d31382062b8781bb124a464" HandleID="k8s-pod-network.d0e70a68e4c338b069d790a09865e7711aa95b4e3d31382062b8781bb124a464" Workload="10.0.0.106-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003aaf20), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.106", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-29 12:08:50.701729291 +0000 UTC"}, Hostname:"10.0.0.106", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 29 12:08:50.756310 containerd[1437]: 2025-01-29 12:08:50.712 [INFO][2827] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 12:08:50.756310 containerd[1437]: 2025-01-29 12:08:50.712 [INFO][2827] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 12:08:50.756310 containerd[1437]: 2025-01-29 12:08:50.712 [INFO][2827] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.106'
Jan 29 12:08:50.756310 containerd[1437]: 2025-01-29 12:08:50.715 [INFO][2827] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d0e70a68e4c338b069d790a09865e7711aa95b4e3d31382062b8781bb124a464" host="10.0.0.106"
Jan 29 12:08:50.756310 containerd[1437]: 2025-01-29 12:08:50.718 [INFO][2827] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.106"
Jan 29 12:08:50.756310 containerd[1437]: 2025-01-29 12:08:50.724 [INFO][2827] ipam/ipam.go 489: Trying affinity for 192.168.103.192/26 host="10.0.0.106"
Jan 29 12:08:50.756310 containerd[1437]: 2025-01-29 12:08:50.726 [INFO][2827] ipam/ipam.go 155: Attempting to load block cidr=192.168.103.192/26 host="10.0.0.106"
Jan 29 12:08:50.756310 containerd[1437]: 2025-01-29 12:08:50.730 [INFO][2827] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.103.192/26 host="10.0.0.106"
Jan 29 12:08:50.756310 containerd[1437]: 2025-01-29 12:08:50.730 [INFO][2827] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.103.192/26 handle="k8s-pod-network.d0e70a68e4c338b069d790a09865e7711aa95b4e3d31382062b8781bb124a464" host="10.0.0.106"
Jan 29 12:08:50.756310 containerd[1437]: 2025-01-29 12:08:50.731 [INFO][2827] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d0e70a68e4c338b069d790a09865e7711aa95b4e3d31382062b8781bb124a464
Jan 29 12:08:50.756310 containerd[1437]: 2025-01-29 12:08:50.735 [INFO][2827] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.103.192/26 handle="k8s-pod-network.d0e70a68e4c338b069d790a09865e7711aa95b4e3d31382062b8781bb124a464" host="10.0.0.106"
Jan 29 12:08:50.756310 containerd[1437]: 2025-01-29 12:08:50.741 [INFO][2827] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.103.194/26] block=192.168.103.192/26 handle="k8s-pod-network.d0e70a68e4c338b069d790a09865e7711aa95b4e3d31382062b8781bb124a464" host="10.0.0.106"
Jan 29 12:08:50.756310 containerd[1437]: 2025-01-29 12:08:50.741 [INFO][2827] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.103.194/26] handle="k8s-pod-network.d0e70a68e4c338b069d790a09865e7711aa95b4e3d31382062b8781bb124a464" host="10.0.0.106"
Jan 29 12:08:50.756310 containerd[1437]: 2025-01-29 12:08:50.741 [INFO][2827] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 12:08:50.756310 containerd[1437]: 2025-01-29 12:08:50.741 [INFO][2827] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.103.194/26] IPv6=[] ContainerID="d0e70a68e4c338b069d790a09865e7711aa95b4e3d31382062b8781bb124a464" HandleID="k8s-pod-network.d0e70a68e4c338b069d790a09865e7711aa95b4e3d31382062b8781bb124a464" Workload="10.0.0.106-k8s-nfs--server--provisioner--0-eth0"
Jan 29 12:08:50.756864 containerd[1437]: 2025-01-29 12:08:50.743 [INFO][2813] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d0e70a68e4c338b069d790a09865e7711aa95b4e3d31382062b8781bb124a464" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.106-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.106-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"d9a9d27e-722f-48b1-9da6-a62ac465abee", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 8, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.106", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.103.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 12:08:50.756864 containerd[1437]: 2025-01-29 12:08:50.743 [INFO][2813] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.103.194/32] ContainerID="d0e70a68e4c338b069d790a09865e7711aa95b4e3d31382062b8781bb124a464" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.106-k8s-nfs--server--provisioner--0-eth0"
Jan 29 12:08:50.756864 containerd[1437]: 2025-01-29 12:08:50.743 [INFO][2813] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="d0e70a68e4c338b069d790a09865e7711aa95b4e3d31382062b8781bb124a464" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.106-k8s-nfs--server--provisioner--0-eth0"
Jan 29 12:08:50.756864 containerd[1437]: 2025-01-29 12:08:50.746 [INFO][2813] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d0e70a68e4c338b069d790a09865e7711aa95b4e3d31382062b8781bb124a464" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.106-k8s-nfs--server--provisioner--0-eth0"
Jan 29 12:08:50.757078 containerd[1437]: 2025-01-29 12:08:50.746 [INFO][2813] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d0e70a68e4c338b069d790a09865e7711aa95b4e3d31382062b8781bb124a464" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.106-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.106-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"d9a9d27e-722f-48b1-9da6-a62ac465abee", ResourceVersion:"1031", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 8, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.106", ContainerID:"d0e70a68e4c338b069d790a09865e7711aa95b4e3d31382062b8781bb124a464", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.103.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"5a:10:94:dc:6c:bb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 12:08:50.757078 containerd[1437]: 2025-01-29 12:08:50.754 [INFO][2813] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d0e70a68e4c338b069d790a09865e7711aa95b4e3d31382062b8781bb124a464" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.106-k8s-nfs--server--provisioner--0-eth0"
Jan 29 12:08:50.778755 containerd[1437]: time="2025-01-29T12:08:50.778637491Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:08:50.778755 containerd[1437]: time="2025-01-29T12:08:50.778701811Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:08:50.778755 containerd[1437]: time="2025-01-29T12:08:50.778717451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:08:50.778910 containerd[1437]: time="2025-01-29T12:08:50.778787891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:08:50.803578 systemd[1]: Started cri-containerd-d0e70a68e4c338b069d790a09865e7711aa95b4e3d31382062b8781bb124a464.scope - libcontainer container d0e70a68e4c338b069d790a09865e7711aa95b4e3d31382062b8781bb124a464.
Jan 29 12:08:50.811999 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 29 12:08:50.858744 containerd[1437]: time="2025-01-29T12:08:50.858699371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:d9a9d27e-722f-48b1-9da6-a62ac465abee,Namespace:default,Attempt:0,} returns sandbox id \"d0e70a68e4c338b069d790a09865e7711aa95b4e3d31382062b8781bb124a464\""
Jan 29 12:08:50.860172 containerd[1437]: time="2025-01-29T12:08:50.860147371Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Jan 29 12:08:51.330131 systemd[1]: run-containerd-runc-k8s.io-d0e70a68e4c338b069d790a09865e7711aa95b4e3d31382062b8781bb124a464-runc.WMTF9P.mount: Deactivated successfully.
Jan 29 12:08:51.678389 kubelet[1737]: E0129 12:08:51.678339    1737 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:08:51.691885 kubelet[1737]: E0129 12:08:51.691843    1737 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:08:51.919595 systemd-networkd[1378]: cali60e51b789ff: Gained IPv6LL
Jan 29 12:08:52.693196 kubelet[1737]: E0129 12:08:52.693015    1737 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:08:53.109675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1495248977.mount: Deactivated successfully.
Jan 29 12:08:53.693770 kubelet[1737]: E0129 12:08:53.693657    1737 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:08:54.422353 containerd[1437]: time="2025-01-29T12:08:54.422302676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:08:54.423367 containerd[1437]: time="2025-01-29T12:08:54.423322975Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625"
Jan 29 12:08:54.424144 containerd[1437]: time="2025-01-29T12:08:54.424089829Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:08:54.426695 containerd[1437]: time="2025-01-29T12:08:54.426665477Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:08:54.427952 containerd[1437]: time="2025-01-29T12:08:54.427910220Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 3.567727529s"
Jan 29 12:08:54.428017 containerd[1437]: time="2025-01-29T12:08:54.427951061Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\""
Jan 29 12:08:54.429994 containerd[1437]: time="2025-01-29T12:08:54.429959778Z" level=info msg="CreateContainer within sandbox \"d0e70a68e4c338b069d790a09865e7711aa95b4e3d31382062b8781bb124a464\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Jan 29 12:08:54.440011 containerd[1437]: time="2025-01-29T12:08:54.439962722Z" level=info msg="CreateContainer within sandbox \"d0e70a68e4c338b069d790a09865e7711aa95b4e3d31382062b8781bb124a464\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"16992594b7638af6bff2dff115d45a751236c63b5d8562f9dba65b892d4488e4\""
Jan 29 12:08:54.440575 containerd[1437]: time="2025-01-29T12:08:54.440544133Z" level=info msg="StartContainer for \"16992594b7638af6bff2dff115d45a751236c63b5d8562f9dba65b892d4488e4\""
Jan 29 12:08:54.467573 systemd[1]: Started cri-containerd-16992594b7638af6bff2dff115d45a751236c63b5d8562f9dba65b892d4488e4.scope - libcontainer container 16992594b7638af6bff2dff115d45a751236c63b5d8562f9dba65b892d4488e4.
Jan 29 12:08:54.506704 containerd[1437]: time="2025-01-29T12:08:54.504712357Z" level=info msg="StartContainer for \"16992594b7638af6bff2dff115d45a751236c63b5d8562f9dba65b892d4488e4\" returns successfully"
Jan 29 12:08:54.695589 kubelet[1737]: E0129 12:08:54.695484    1737 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:08:55.695775 kubelet[1737]: E0129 12:08:55.695732    1737 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:08:56.089849 kubelet[1737]: I0129 12:08:56.089531    1737 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 29 12:08:56.089955 kubelet[1737]: E0129 12:08:56.089931    1737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:08:56.244957 kubelet[1737]: I0129 12:08:56.244900    1737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.675913193 podStartE2EDuration="6.244882458s" podCreationTimestamp="2025-01-29 12:08:50 +0000 UTC" firstStartedPulling="2025-01-29 12:08:50.859840411 +0000 UTC m=+21.543203161" lastFinishedPulling="2025-01-29 12:08:54.428809676 +0000 UTC m=+25.112172426" observedRunningTime="2025-01-29 12:08:54.825564958 +0000 UTC m=+25.508927708" watchObservedRunningTime="2025-01-29 12:08:56.244882458 +0000 UTC m=+26.928245168"
Jan 29 12:08:56.696874 kubelet[1737]: E0129 12:08:56.696838    1737 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:08:56.816463 kubelet[1737]: E0129 12:08:56.816439    1737 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jan 29 12:08:57.697760 kubelet[1737]: E0129 12:08:57.697718    1737 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:08:57.742236 containerd[1437]: time="2025-01-29T12:08:57.741994592Z" level=info msg="StopPodSandbox for \"39580900ef4fb7cbdf98b056838d728af25e1ec2bb37ad667a0f912ad214f31f\""
Jan 29 12:08:57.815620 containerd[1437]: 2025-01-29 12:08:57.786 [INFO][3061] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="39580900ef4fb7cbdf98b056838d728af25e1ec2bb37ad667a0f912ad214f31f"
Jan 29 12:08:57.815620 containerd[1437]: 2025-01-29 12:08:57.786 [INFO][3061] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="39580900ef4fb7cbdf98b056838d728af25e1ec2bb37ad667a0f912ad214f31f" iface="eth0" netns="/var/run/netns/cni-e8eb1ee8-4a9d-eaa5-629c-bb3f1e4cc53a"
Jan 29 12:08:57.815620 containerd[1437]: 2025-01-29 12:08:57.786 [INFO][3061] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="39580900ef4fb7cbdf98b056838d728af25e1ec2bb37ad667a0f912ad214f31f" iface="eth0" netns="/var/run/netns/cni-e8eb1ee8-4a9d-eaa5-629c-bb3f1e4cc53a"
Jan 29 12:08:57.815620 containerd[1437]: 2025-01-29 12:08:57.787 [INFO][3061] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone.  Nothing to do. ContainerID="39580900ef4fb7cbdf98b056838d728af25e1ec2bb37ad667a0f912ad214f31f" iface="eth0" netns="/var/run/netns/cni-e8eb1ee8-4a9d-eaa5-629c-bb3f1e4cc53a"
Jan 29 12:08:57.815620 containerd[1437]: 2025-01-29 12:08:57.787 [INFO][3061] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="39580900ef4fb7cbdf98b056838d728af25e1ec2bb37ad667a0f912ad214f31f"
Jan 29 12:08:57.815620 containerd[1437]: 2025-01-29 12:08:57.787 [INFO][3061] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="39580900ef4fb7cbdf98b056838d728af25e1ec2bb37ad667a0f912ad214f31f"
Jan 29 12:08:57.815620 containerd[1437]: 2025-01-29 12:08:57.803 [INFO][3069] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="39580900ef4fb7cbdf98b056838d728af25e1ec2bb37ad667a0f912ad214f31f" HandleID="k8s-pod-network.39580900ef4fb7cbdf98b056838d728af25e1ec2bb37ad667a0f912ad214f31f" Workload="10.0.0.106-k8s-csi--node--driver--w8lwv-eth0"
Jan 29 12:08:57.815620 containerd[1437]: 2025-01-29 12:08:57.803 [INFO][3069] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 12:08:57.815620 containerd[1437]: 2025-01-29 12:08:57.803 [INFO][3069] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 12:08:57.815620 containerd[1437]: 2025-01-29 12:08:57.811 [WARNING][3069] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="39580900ef4fb7cbdf98b056838d728af25e1ec2bb37ad667a0f912ad214f31f" HandleID="k8s-pod-network.39580900ef4fb7cbdf98b056838d728af25e1ec2bb37ad667a0f912ad214f31f" Workload="10.0.0.106-k8s-csi--node--driver--w8lwv-eth0"
Jan 29 12:08:57.815620 containerd[1437]: 2025-01-29 12:08:57.811 [INFO][3069] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="39580900ef4fb7cbdf98b056838d728af25e1ec2bb37ad667a0f912ad214f31f" HandleID="k8s-pod-network.39580900ef4fb7cbdf98b056838d728af25e1ec2bb37ad667a0f912ad214f31f" Workload="10.0.0.106-k8s-csi--node--driver--w8lwv-eth0"
Jan 29 12:08:57.815620 containerd[1437]: 2025-01-29 12:08:57.813 [INFO][3069] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 12:08:57.815620 containerd[1437]: 2025-01-29 12:08:57.814 [INFO][3061] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="39580900ef4fb7cbdf98b056838d728af25e1ec2bb37ad667a0f912ad214f31f"
Jan 29 12:08:57.815999 containerd[1437]: time="2025-01-29T12:08:57.815750033Z" level=info msg="TearDown network for sandbox \"39580900ef4fb7cbdf98b056838d728af25e1ec2bb37ad667a0f912ad214f31f\" successfully"
Jan 29 12:08:57.815999 containerd[1437]: time="2025-01-29T12:08:57.815777393Z" level=info msg="StopPodSandbox for \"39580900ef4fb7cbdf98b056838d728af25e1ec2bb37ad667a0f912ad214f31f\" returns successfully"
Jan 29 12:08:57.816367 containerd[1437]: time="2025-01-29T12:08:57.816343402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w8lwv,Uid:9851e09c-960e-4f3c-998e-b4757588d7ae,Namespace:calico-system,Attempt:1,}"
Jan 29 12:08:57.817890 systemd[1]: run-netns-cni\x2de8eb1ee8\x2d4a9d\x2deaa5\x2d629c\x2dbb3f1e4cc53a.mount: Deactivated successfully.
Jan 29 12:08:57.989943 systemd-networkd[1378]: cali482a36b72a2: Link UP
Jan 29 12:08:57.990086 systemd-networkd[1378]: cali482a36b72a2: Gained carrier
Jan 29 12:08:58.001470 containerd[1437]: 2025-01-29 12:08:57.920 [INFO][3080] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.106-k8s-csi--node--driver--w8lwv-eth0 csi-node-driver- calico-system  9851e09c-960e-4f3c-998e-b4757588d7ae 1075 0 2025-01-29 12:08:31 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s  10.0.0.106  csi-node-driver-w8lwv eth0 csi-node-driver [] []   [kns.calico-system ksa.calico-system.csi-node-driver] cali482a36b72a2  [] []}} ContainerID="f2114f1f059ad31af0ad11f5700f66386e45b8e6e0eb6126584dab5c73299afe" Namespace="calico-system" Pod="csi-node-driver-w8lwv" WorkloadEndpoint="10.0.0.106-k8s-csi--node--driver--w8lwv-"
Jan 29 12:08:58.001470 containerd[1437]: 2025-01-29 12:08:57.921 [INFO][3080] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f2114f1f059ad31af0ad11f5700f66386e45b8e6e0eb6126584dab5c73299afe" Namespace="calico-system" Pod="csi-node-driver-w8lwv" WorkloadEndpoint="10.0.0.106-k8s-csi--node--driver--w8lwv-eth0"
Jan 29 12:08:58.001470 containerd[1437]: 2025-01-29 12:08:57.943 [INFO][3089] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f2114f1f059ad31af0ad11f5700f66386e45b8e6e0eb6126584dab5c73299afe" HandleID="k8s-pod-network.f2114f1f059ad31af0ad11f5700f66386e45b8e6e0eb6126584dab5c73299afe" Workload="10.0.0.106-k8s-csi--node--driver--w8lwv-eth0"
Jan 29 12:08:58.001470 containerd[1437]: 2025-01-29 12:08:57.955 [INFO][3089] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f2114f1f059ad31af0ad11f5700f66386e45b8e6e0eb6126584dab5c73299afe" HandleID="k8s-pod-network.f2114f1f059ad31af0ad11f5700f66386e45b8e6e0eb6126584dab5c73299afe" Workload="10.0.0.106-k8s-csi--node--driver--w8lwv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d9330), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.106", "pod":"csi-node-driver-w8lwv", "timestamp":"2025-01-29 12:08:57.94380886 +0000 UTC"}, Hostname:"10.0.0.106", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 29 12:08:58.001470 containerd[1437]: 2025-01-29 12:08:57.955 [INFO][3089] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 12:08:58.001470 containerd[1437]: 2025-01-29 12:08:57.955 [INFO][3089] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 12:08:58.001470 containerd[1437]: 2025-01-29 12:08:57.955 [INFO][3089] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.106'
Jan 29 12:08:58.001470 containerd[1437]: 2025-01-29 12:08:57.957 [INFO][3089] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f2114f1f059ad31af0ad11f5700f66386e45b8e6e0eb6126584dab5c73299afe" host="10.0.0.106"
Jan 29 12:08:58.001470 containerd[1437]: 2025-01-29 12:08:57.961 [INFO][3089] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.106"
Jan 29 12:08:58.001470 containerd[1437]: 2025-01-29 12:08:57.966 [INFO][3089] ipam/ipam.go 489: Trying affinity for 192.168.103.192/26 host="10.0.0.106"
Jan 29 12:08:58.001470 containerd[1437]: 2025-01-29 12:08:57.968 [INFO][3089] ipam/ipam.go 155: Attempting to load block cidr=192.168.103.192/26 host="10.0.0.106"
Jan 29 12:08:58.001470 containerd[1437]: 2025-01-29 12:08:57.972 [INFO][3089] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.103.192/26 host="10.0.0.106"
Jan 29 12:08:58.001470 containerd[1437]: 2025-01-29 12:08:57.972 [INFO][3089] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.103.192/26 handle="k8s-pod-network.f2114f1f059ad31af0ad11f5700f66386e45b8e6e0eb6126584dab5c73299afe" host="10.0.0.106"
Jan 29 12:08:58.001470 containerd[1437]: 2025-01-29 12:08:57.973 [INFO][3089] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f2114f1f059ad31af0ad11f5700f66386e45b8e6e0eb6126584dab5c73299afe
Jan 29 12:08:58.001470 containerd[1437]: 2025-01-29 12:08:57.978 [INFO][3089] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.103.192/26 handle="k8s-pod-network.f2114f1f059ad31af0ad11f5700f66386e45b8e6e0eb6126584dab5c73299afe" host="10.0.0.106"
Jan 29 12:08:58.001470 containerd[1437]: 2025-01-29 12:08:57.982 [INFO][3089] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.103.195/26] block=192.168.103.192/26 handle="k8s-pod-network.f2114f1f059ad31af0ad11f5700f66386e45b8e6e0eb6126584dab5c73299afe" host="10.0.0.106"
Jan 29 12:08:58.001470 containerd[1437]: 2025-01-29 12:08:57.982 [INFO][3089] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.103.195/26] handle="k8s-pod-network.f2114f1f059ad31af0ad11f5700f66386e45b8e6e0eb6126584dab5c73299afe" host="10.0.0.106"
Jan 29 12:08:58.001470 containerd[1437]: 2025-01-29 12:08:57.983 [INFO][3089] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 12:08:58.001470 containerd[1437]: 2025-01-29 12:08:57.983 [INFO][3089] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.103.195/26] IPv6=[] ContainerID="f2114f1f059ad31af0ad11f5700f66386e45b8e6e0eb6126584dab5c73299afe" HandleID="k8s-pod-network.f2114f1f059ad31af0ad11f5700f66386e45b8e6e0eb6126584dab5c73299afe" Workload="10.0.0.106-k8s-csi--node--driver--w8lwv-eth0"
Jan 29 12:08:58.002092 containerd[1437]: 2025-01-29 12:08:57.984 [INFO][3080] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f2114f1f059ad31af0ad11f5700f66386e45b8e6e0eb6126584dab5c73299afe" Namespace="calico-system" Pod="csi-node-driver-w8lwv" WorkloadEndpoint="10.0.0.106-k8s-csi--node--driver--w8lwv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.106-k8s-csi--node--driver--w8lwv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9851e09c-960e-4f3c-998e-b4757588d7ae", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 8, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.106", ContainerID:"", Pod:"csi-node-driver-w8lwv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.103.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali482a36b72a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 12:08:58.002092 containerd[1437]: 2025-01-29 12:08:57.984 [INFO][3080] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.103.195/32] ContainerID="f2114f1f059ad31af0ad11f5700f66386e45b8e6e0eb6126584dab5c73299afe" Namespace="calico-system" Pod="csi-node-driver-w8lwv" WorkloadEndpoint="10.0.0.106-k8s-csi--node--driver--w8lwv-eth0"
Jan 29 12:08:58.002092 containerd[1437]: 2025-01-29 12:08:57.985 [INFO][3080] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali482a36b72a2 ContainerID="f2114f1f059ad31af0ad11f5700f66386e45b8e6e0eb6126584dab5c73299afe" Namespace="calico-system" Pod="csi-node-driver-w8lwv" WorkloadEndpoint="10.0.0.106-k8s-csi--node--driver--w8lwv-eth0"
Jan 29 12:08:58.002092 containerd[1437]: 2025-01-29 12:08:57.990 [INFO][3080] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f2114f1f059ad31af0ad11f5700f66386e45b8e6e0eb6126584dab5c73299afe" Namespace="calico-system" Pod="csi-node-driver-w8lwv" WorkloadEndpoint="10.0.0.106-k8s-csi--node--driver--w8lwv-eth0"
Jan 29 12:08:58.002092 containerd[1437]: 2025-01-29 12:08:57.990 [INFO][3080] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f2114f1f059ad31af0ad11f5700f66386e45b8e6e0eb6126584dab5c73299afe" Namespace="calico-system" Pod="csi-node-driver-w8lwv" WorkloadEndpoint="10.0.0.106-k8s-csi--node--driver--w8lwv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.106-k8s-csi--node--driver--w8lwv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9851e09c-960e-4f3c-998e-b4757588d7ae", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 8, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.106", ContainerID:"f2114f1f059ad31af0ad11f5700f66386e45b8e6e0eb6126584dab5c73299afe", Pod:"csi-node-driver-w8lwv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.103.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali482a36b72a2", MAC:"aa:f0:5a:6e:a1:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 12:08:58.002092 containerd[1437]: 2025-01-29 12:08:57.998 [INFO][3080] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f2114f1f059ad31af0ad11f5700f66386e45b8e6e0eb6126584dab5c73299afe" Namespace="calico-system" Pod="csi-node-driver-w8lwv" WorkloadEndpoint="10.0.0.106-k8s-csi--node--driver--w8lwv-eth0"
Jan 29 12:08:58.018480 containerd[1437]: time="2025-01-29T12:08:58.018380818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:08:58.018480 containerd[1437]: time="2025-01-29T12:08:58.018446379Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:08:58.018635 containerd[1437]: time="2025-01-29T12:08:58.018463299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:08:58.018729 containerd[1437]: time="2025-01-29T12:08:58.018538060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:08:58.052579 systemd[1]: Started cri-containerd-f2114f1f059ad31af0ad11f5700f66386e45b8e6e0eb6126584dab5c73299afe.scope - libcontainer container f2114f1f059ad31af0ad11f5700f66386e45b8e6e0eb6126584dab5c73299afe.
Jan 29 12:08:58.061309 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 29 12:08:58.073689 containerd[1437]: time="2025-01-29T12:08:58.073652366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w8lwv,Uid:9851e09c-960e-4f3c-998e-b4757588d7ae,Namespace:calico-system,Attempt:1,} returns sandbox id \"f2114f1f059ad31af0ad11f5700f66386e45b8e6e0eb6126584dab5c73299afe\""
Jan 29 12:08:58.075099 containerd[1437]: time="2025-01-29T12:08:58.075068146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\""
Jan 29 12:08:58.698037 kubelet[1737]: E0129 12:08:58.697984    1737 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:08:58.858892 containerd[1437]: time="2025-01-29T12:08:58.858846558Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:08:58.859429 containerd[1437]: time="2025-01-29T12:08:58.859393206Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730"
Jan 29 12:08:58.860190 containerd[1437]: time="2025-01-29T12:08:58.860155137Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:08:58.862070 containerd[1437]: time="2025-01-29T12:08:58.862011243Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:08:58.862928 containerd[1437]: time="2025-01-29T12:08:58.862813975Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 787.706709ms"
Jan 29 12:08:58.862928 containerd[1437]: time="2025-01-29T12:08:58.862845495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\""
Jan 29 12:08:58.864724 containerd[1437]: time="2025-01-29T12:08:58.864581840Z" level=info msg="CreateContainer within sandbox \"f2114f1f059ad31af0ad11f5700f66386e45b8e6e0eb6126584dab5c73299afe\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Jan 29 12:08:58.876147 containerd[1437]: time="2025-01-29T12:08:58.876105204Z" level=info msg="CreateContainer within sandbox \"f2114f1f059ad31af0ad11f5700f66386e45b8e6e0eb6126584dab5c73299afe\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"776d241d378139c60375020f6e6f2bc5695a84285de138d97f7cbf2cb97846bf\""
Jan 29 12:08:58.876760 containerd[1437]: time="2025-01-29T12:08:58.876715693Z" level=info msg="StartContainer for \"776d241d378139c60375020f6e6f2bc5695a84285de138d97f7cbf2cb97846bf\""
Jan 29 12:08:58.912576 systemd[1]: Started cri-containerd-776d241d378139c60375020f6e6f2bc5695a84285de138d97f7cbf2cb97846bf.scope - libcontainer container 776d241d378139c60375020f6e6f2bc5695a84285de138d97f7cbf2cb97846bf.
Jan 29 12:08:58.937958 containerd[1437]: time="2025-01-29T12:08:58.937918685Z" level=info msg="StartContainer for \"776d241d378139c60375020f6e6f2bc5695a84285de138d97f7cbf2cb97846bf\" returns successfully"
Jan 29 12:08:58.939050 containerd[1437]: time="2025-01-29T12:08:58.939020941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Jan 29 12:08:59.343530 systemd-networkd[1378]: cali482a36b72a2: Gained IPv6LL
Jan 29 12:08:59.700045 kubelet[1737]: E0129 12:08:59.699553    1737 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:08:59.833663 containerd[1437]: time="2025-01-29T12:08:59.832870823Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:08:59.833663 containerd[1437]: time="2025-01-29T12:08:59.833631113Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368"
Jan 29 12:08:59.834465 containerd[1437]: time="2025-01-29T12:08:59.834404124Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:08:59.836780 containerd[1437]: time="2025-01-29T12:08:59.836601233Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:08:59.837455 containerd[1437]: time="2025-01-29T12:08:59.837401044Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 898.340582ms"
Jan 29 12:08:59.837455 containerd[1437]: time="2025-01-29T12:08:59.837481965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\""
Jan 29 12:08:59.839715 containerd[1437]: time="2025-01-29T12:08:59.839654794Z" level=info msg="CreateContainer within sandbox \"f2114f1f059ad31af0ad11f5700f66386e45b8e6e0eb6126584dab5c73299afe\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Jan 29 12:08:59.852668 containerd[1437]: time="2025-01-29T12:08:59.852629167Z" level=info msg="CreateContainer within sandbox \"f2114f1f059ad31af0ad11f5700f66386e45b8e6e0eb6126584dab5c73299afe\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d134f2e0b42022c0c530a4bf62f773f105767db439e4e8cde3a9f9ef198a39ff\""
Jan 29 12:08:59.853168 containerd[1437]: time="2025-01-29T12:08:59.853121094Z" level=info msg="StartContainer for \"d134f2e0b42022c0c530a4bf62f773f105767db439e4e8cde3a9f9ef198a39ff\""
Jan 29 12:08:59.898584 systemd[1]: Started cri-containerd-d134f2e0b42022c0c530a4bf62f773f105767db439e4e8cde3a9f9ef198a39ff.scope - libcontainer container d134f2e0b42022c0c530a4bf62f773f105767db439e4e8cde3a9f9ef198a39ff.
Jan 29 12:08:59.937268 containerd[1437]: time="2025-01-29T12:08:59.937182257Z" level=info msg="StartContainer for \"d134f2e0b42022c0c530a4bf62f773f105767db439e4e8cde3a9f9ef198a39ff\" returns successfully"
Jan 29 12:09:00.699980 kubelet[1737]: E0129 12:09:00.699935    1737 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:09:00.765071 kubelet[1737]: I0129 12:09:00.765025    1737 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Jan 29 12:09:00.765071 kubelet[1737]: I0129 12:09:00.765070    1737 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Jan 29 12:09:00.843930 kubelet[1737]: I0129 12:09:00.843872    1737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-w8lwv" podStartSLOduration=28.080394597 podStartE2EDuration="29.843857551s" podCreationTimestamp="2025-01-29 12:08:31 +0000 UTC" firstStartedPulling="2025-01-29 12:08:58.074813502 +0000 UTC m=+28.758176212" lastFinishedPulling="2025-01-29 12:08:59.838276456 +0000 UTC m=+30.521639166" observedRunningTime="2025-01-29 12:09:00.842755138 +0000 UTC m=+31.526117848" watchObservedRunningTime="2025-01-29 12:09:00.843857551 +0000 UTC m=+31.527220261"
Jan 29 12:09:01.701116 kubelet[1737]: E0129 12:09:01.701076    1737 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:09:02.701513 kubelet[1737]: E0129 12:09:02.701472    1737 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:09:03.701843 kubelet[1737]: E0129 12:09:03.701793    1737 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:09:04.589099 systemd[1]: Created slice kubepods-besteffort-podd4458379_57c9_45c6_b417_d7ac42fcd414.slice - libcontainer container kubepods-besteffort-podd4458379_57c9_45c6_b417_d7ac42fcd414.slice.
Jan 29 12:09:04.702735 kubelet[1737]: E0129 12:09:04.702668    1737 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:09:04.768329 kubelet[1737]: I0129 12:09:04.768293    1737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-26dea70f-6312-407b-8dba-996ec622380b\" (UniqueName: \"kubernetes.io/nfs/d4458379-57c9-45c6-b417-d7ac42fcd414-pvc-26dea70f-6312-407b-8dba-996ec622380b\") pod \"test-pod-1\" (UID: \"d4458379-57c9-45c6-b417-d7ac42fcd414\") " pod="default/test-pod-1"
Jan 29 12:09:04.768329 kubelet[1737]: I0129 12:09:04.768333    1737 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxn2w\" (UniqueName: \"kubernetes.io/projected/d4458379-57c9-45c6-b417-d7ac42fcd414-kube-api-access-zxn2w\") pod \"test-pod-1\" (UID: \"d4458379-57c9-45c6-b417-d7ac42fcd414\") " pod="default/test-pod-1"
Jan 29 12:09:04.890450 kernel: FS-Cache: Loaded
Jan 29 12:09:04.916678 kernel: RPC: Registered named UNIX socket transport module.
Jan 29 12:09:04.916744 kernel: RPC: Registered udp transport module.
Jan 29 12:09:04.916762 kernel: RPC: Registered tcp transport module.
Jan 29 12:09:04.917573 kernel: RPC: Registered tcp-with-tls transport module.
Jan 29 12:09:04.918481 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 29 12:09:05.093830 kernel: NFS: Registering the id_resolver key type
Jan 29 12:09:05.094028 kernel: Key type id_resolver registered
Jan 29 12:09:05.094059 kernel: Key type id_legacy registered
Jan 29 12:09:05.115358 nfsidmap[3259]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Jan 29 12:09:05.120589 nfsidmap[3262]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Jan 29 12:09:05.192507 containerd[1437]: time="2025-01-29T12:09:05.192398465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d4458379-57c9-45c6-b417-d7ac42fcd414,Namespace:default,Attempt:0,}"
Jan 29 12:09:05.293556 systemd-networkd[1378]: cali5ec59c6bf6e: Link UP
Jan 29 12:09:05.294031 systemd-networkd[1378]: cali5ec59c6bf6e: Gained carrier
Jan 29 12:09:05.301768 containerd[1437]: 2025-01-29 12:09:05.234 [INFO][3265] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.106-k8s-test--pod--1-eth0  default  d4458379-57c9-45c6-b417-d7ac42fcd414 1140 0 2025-01-29 12:08:50 +0000 UTC <nil> <nil> map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s  10.0.0.106  test-pod-1 eth0 default [] []   [kns.default ksa.default.default] cali5ec59c6bf6e  [] []}} ContainerID="726326fe87e3f71ed1fe52681bc258ff167278923bbdb595b9798e325c0acd5e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.106-k8s-test--pod--1-"
Jan 29 12:09:05.301768 containerd[1437]: 2025-01-29 12:09:05.234 [INFO][3265] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="726326fe87e3f71ed1fe52681bc258ff167278923bbdb595b9798e325c0acd5e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.106-k8s-test--pod--1-eth0"
Jan 29 12:09:05.301768 containerd[1437]: 2025-01-29 12:09:05.256 [INFO][3278] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="726326fe87e3f71ed1fe52681bc258ff167278923bbdb595b9798e325c0acd5e" HandleID="k8s-pod-network.726326fe87e3f71ed1fe52681bc258ff167278923bbdb595b9798e325c0acd5e" Workload="10.0.0.106-k8s-test--pod--1-eth0"
Jan 29 12:09:05.301768 containerd[1437]: 2025-01-29 12:09:05.267 [INFO][3278] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="726326fe87e3f71ed1fe52681bc258ff167278923bbdb595b9798e325c0acd5e" HandleID="k8s-pod-network.726326fe87e3f71ed1fe52681bc258ff167278923bbdb595b9798e325c0acd5e" Workload="10.0.0.106-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400027ad20), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.106", "pod":"test-pod-1", "timestamp":"2025-01-29 12:09:05.256151524 +0000 UTC"}, Hostname:"10.0.0.106", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 29 12:09:05.301768 containerd[1437]: 2025-01-29 12:09:05.267 [INFO][3278] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jan 29 12:09:05.301768 containerd[1437]: 2025-01-29 12:09:05.267 [INFO][3278] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jan 29 12:09:05.301768 containerd[1437]: 2025-01-29 12:09:05.267 [INFO][3278] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.106'
Jan 29 12:09:05.301768 containerd[1437]: 2025-01-29 12:09:05.269 [INFO][3278] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.726326fe87e3f71ed1fe52681bc258ff167278923bbdb595b9798e325c0acd5e" host="10.0.0.106"
Jan 29 12:09:05.301768 containerd[1437]: 2025-01-29 12:09:05.272 [INFO][3278] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.106"
Jan 29 12:09:05.301768 containerd[1437]: 2025-01-29 12:09:05.276 [INFO][3278] ipam/ipam.go 489: Trying affinity for 192.168.103.192/26 host="10.0.0.106"
Jan 29 12:09:05.301768 containerd[1437]: 2025-01-29 12:09:05.278 [INFO][3278] ipam/ipam.go 155: Attempting to load block cidr=192.168.103.192/26 host="10.0.0.106"
Jan 29 12:09:05.301768 containerd[1437]: 2025-01-29 12:09:05.279 [INFO][3278] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.103.192/26 host="10.0.0.106"
Jan 29 12:09:05.301768 containerd[1437]: 2025-01-29 12:09:05.280 [INFO][3278] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.103.192/26 handle="k8s-pod-network.726326fe87e3f71ed1fe52681bc258ff167278923bbdb595b9798e325c0acd5e" host="10.0.0.106"
Jan 29 12:09:05.301768 containerd[1437]: 2025-01-29 12:09:05.281 [INFO][3278] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.726326fe87e3f71ed1fe52681bc258ff167278923bbdb595b9798e325c0acd5e
Jan 29 12:09:05.301768 containerd[1437]: 2025-01-29 12:09:05.284 [INFO][3278] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.103.192/26 handle="k8s-pod-network.726326fe87e3f71ed1fe52681bc258ff167278923bbdb595b9798e325c0acd5e" host="10.0.0.106"
Jan 29 12:09:05.301768 containerd[1437]: 2025-01-29 12:09:05.290 [INFO][3278] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.103.196/26] block=192.168.103.192/26 handle="k8s-pod-network.726326fe87e3f71ed1fe52681bc258ff167278923bbdb595b9798e325c0acd5e" host="10.0.0.106"
Jan 29 12:09:05.301768 containerd[1437]: 2025-01-29 12:09:05.290 [INFO][3278] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.103.196/26] handle="k8s-pod-network.726326fe87e3f71ed1fe52681bc258ff167278923bbdb595b9798e325c0acd5e" host="10.0.0.106"
Jan 29 12:09:05.301768 containerd[1437]: 2025-01-29 12:09:05.290 [INFO][3278] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jan 29 12:09:05.301768 containerd[1437]: 2025-01-29 12:09:05.290 [INFO][3278] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.103.196/26] IPv6=[] ContainerID="726326fe87e3f71ed1fe52681bc258ff167278923bbdb595b9798e325c0acd5e" HandleID="k8s-pod-network.726326fe87e3f71ed1fe52681bc258ff167278923bbdb595b9798e325c0acd5e" Workload="10.0.0.106-k8s-test--pod--1-eth0"
Jan 29 12:09:05.301768 containerd[1437]: 2025-01-29 12:09:05.291 [INFO][3265] cni-plugin/k8s.go 386: Populated endpoint ContainerID="726326fe87e3f71ed1fe52681bc258ff167278923bbdb595b9798e325c0acd5e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.106-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.106-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"d4458379-57c9-45c6-b417-d7ac42fcd414", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 8, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.106", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.103.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 12:09:05.302633 containerd[1437]: 2025-01-29 12:09:05.291 [INFO][3265] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.103.196/32] ContainerID="726326fe87e3f71ed1fe52681bc258ff167278923bbdb595b9798e325c0acd5e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.106-k8s-test--pod--1-eth0"
Jan 29 12:09:05.302633 containerd[1437]: 2025-01-29 12:09:05.291 [INFO][3265] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="726326fe87e3f71ed1fe52681bc258ff167278923bbdb595b9798e325c0acd5e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.106-k8s-test--pod--1-eth0"
Jan 29 12:09:05.302633 containerd[1437]: 2025-01-29 12:09:05.294 [INFO][3265] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="726326fe87e3f71ed1fe52681bc258ff167278923bbdb595b9798e325c0acd5e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.106-k8s-test--pod--1-eth0"
Jan 29 12:09:05.302633 containerd[1437]: 2025-01-29 12:09:05.294 [INFO][3265] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="726326fe87e3f71ed1fe52681bc258ff167278923bbdb595b9798e325c0acd5e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.106-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.106-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"d4458379-57c9-45c6-b417-d7ac42fcd414", ResourceVersion:"1140", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 8, 50, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.106", ContainerID:"726326fe87e3f71ed1fe52681bc258ff167278923bbdb595b9798e325c0acd5e", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.103.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"22:d3:e2:26:bb:52", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Jan 29 12:09:05.302633 containerd[1437]: 2025-01-29 12:09:05.300 [INFO][3265] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="726326fe87e3f71ed1fe52681bc258ff167278923bbdb595b9798e325c0acd5e" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.106-k8s-test--pod--1-eth0"
Jan 29 12:09:05.320046 containerd[1437]: time="2025-01-29T12:09:05.319947582Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 29 12:09:05.320046 containerd[1437]: time="2025-01-29T12:09:05.320009383Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 29 12:09:05.320046 containerd[1437]: time="2025-01-29T12:09:05.320023343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:09:05.320255 containerd[1437]: time="2025-01-29T12:09:05.320096784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 29 12:09:05.344557 systemd[1]: Started cri-containerd-726326fe87e3f71ed1fe52681bc258ff167278923bbdb595b9798e325c0acd5e.scope - libcontainer container 726326fe87e3f71ed1fe52681bc258ff167278923bbdb595b9798e325c0acd5e.
Jan 29 12:09:05.354019 systemd-resolved[1307]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jan 29 12:09:05.368246 containerd[1437]: time="2025-01-29T12:09:05.368182300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d4458379-57c9-45c6-b417-d7ac42fcd414,Namespace:default,Attempt:0,} returns sandbox id \"726326fe87e3f71ed1fe52681bc258ff167278923bbdb595b9798e325c0acd5e\""
Jan 29 12:09:05.369145 containerd[1437]: time="2025-01-29T12:09:05.369074348Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jan 29 12:09:05.610481 containerd[1437]: time="2025-01-29T12:09:05.609978614Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 29 12:09:05.610612 containerd[1437]: time="2025-01-29T12:09:05.610545579Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jan 29 12:09:05.613731 containerd[1437]: time="2025-01-29T12:09:05.613681648Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"67680368\" in 244.566899ms"
Jan 29 12:09:05.613731 containerd[1437]: time="2025-01-29T12:09:05.613723368Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:24e054abc3d1f73f3d72f6d30f9f1f63a4b4a2d920cd71b830c844925b3770a2\""
Jan 29 12:09:05.615750 containerd[1437]: time="2025-01-29T12:09:05.615721706Z" level=info msg="CreateContainer within sandbox \"726326fe87e3f71ed1fe52681bc258ff167278923bbdb595b9798e325c0acd5e\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jan 29 12:09:05.616913 update_engine[1423]: I20250129 12:09:05.616863  1423 update_attempter.cc:509] Updating boot flags...
Jan 29 12:09:05.628640 containerd[1437]: time="2025-01-29T12:09:05.628602863Z" level=info msg="CreateContainer within sandbox \"726326fe87e3f71ed1fe52681bc258ff167278923bbdb595b9798e325c0acd5e\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"de6973422bf1e38b0cc08236a2ea881a56835eedf157f4a6ef3e8b99a02a0dc4\""
Jan 29 12:09:05.629089 containerd[1437]: time="2025-01-29T12:09:05.629052627Z" level=info msg="StartContainer for \"de6973422bf1e38b0cc08236a2ea881a56835eedf157f4a6ef3e8b99a02a0dc4\""
Jan 29 12:09:05.649462 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3254)
Jan 29 12:09:05.678571 systemd[1]: Started cri-containerd-de6973422bf1e38b0cc08236a2ea881a56835eedf157f4a6ef3e8b99a02a0dc4.scope - libcontainer container de6973422bf1e38b0cc08236a2ea881a56835eedf157f4a6ef3e8b99a02a0dc4.
Jan 29 12:09:05.699967 containerd[1437]: time="2025-01-29T12:09:05.699926390Z" level=info msg="StartContainer for \"de6973422bf1e38b0cc08236a2ea881a56835eedf157f4a6ef3e8b99a02a0dc4\" returns successfully"
Jan 29 12:09:05.703593 kubelet[1737]: E0129 12:09:05.703502    1737 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:09:05.850152 kubelet[1737]: I0129 12:09:05.850010    1737 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=15.604316082 podStartE2EDuration="15.849978191s" podCreationTimestamp="2025-01-29 12:08:50 +0000 UTC" firstStartedPulling="2025-01-29 12:09:05.368826226 +0000 UTC m=+36.052188976" lastFinishedPulling="2025-01-29 12:09:05.614488335 +0000 UTC m=+36.297851085" observedRunningTime="2025-01-29 12:09:05.849980111 +0000 UTC m=+36.533342901" watchObservedRunningTime="2025-01-29 12:09:05.849978191 +0000 UTC m=+36.533340941"
Jan 29 12:09:06.703797 kubelet[1737]: E0129 12:09:06.703746    1737 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jan 29 12:09:07.023591 systemd-networkd[1378]: cali5ec59c6bf6e: Gained IPv6LL
Jan 29 12:09:07.704189 kubelet[1737]: E0129 12:09:07.704101    1737 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"