May 13 00:33:02.915317 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 13 00:33:02.915350 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Mon May 12 22:51:32 -00 2025
May 13 00:33:02.915360 kernel: KASLR enabled
May 13 00:33:02.915366 kernel: efi: EFI v2.7 by EDK II
May 13 00:33:02.915372 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
May 13 00:33:02.915378 kernel: random: crng init done
May 13 00:33:02.915385 kernel: ACPI: Early table checksum verification disabled
May 13 00:33:02.915391 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
May 13 00:33:02.915397 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
May 13 00:33:02.915405 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:33:02.915412 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:33:02.915417 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:33:02.915424 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:33:02.915430 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:33:02.915438 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:33:02.915446 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:33:02.915452 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:33:02.915459 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 13 00:33:02.915466 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 13 00:33:02.915472 kernel: NUMA: Failed to initialise from firmware
May 13 00:33:02.915479 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 13 00:33:02.915485 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
May 13 00:33:02.915492 kernel: Zone ranges:
May 13 00:33:02.915498 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 13 00:33:02.915504 kernel: DMA32 empty
May 13 00:33:02.915512 kernel: Normal empty
May 13 00:33:02.915518 kernel: Movable zone start for each node
May 13 00:33:02.915524 kernel: Early memory node ranges
May 13 00:33:02.915530 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
May 13 00:33:02.915537 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
May 13 00:33:02.915543 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
May 13 00:33:02.915549 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 13 00:33:02.915556 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 13 00:33:02.915562 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 13 00:33:02.915568 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 13 00:33:02.915574 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 13 00:33:02.915581 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 13 00:33:02.915588 kernel: psci: probing for conduit method from ACPI.
May 13 00:33:02.915595 kernel: psci: PSCIv1.1 detected in firmware.
May 13 00:33:02.915601 kernel: psci: Using standard PSCI v0.2 function IDs
May 13 00:33:02.915610 kernel: psci: Trusted OS migration not required
May 13 00:33:02.915617 kernel: psci: SMC Calling Convention v1.1
May 13 00:33:02.915624 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 13 00:33:02.915632 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 13 00:33:02.915639 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 13 00:33:02.915646 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 13 00:33:02.915652 kernel: Detected PIPT I-cache on CPU0
May 13 00:33:02.915659 kernel: CPU features: detected: GIC system register CPU interface
May 13 00:33:02.915666 kernel: CPU features: detected: Hardware dirty bit management
May 13 00:33:02.915672 kernel: CPU features: detected: Spectre-v4
May 13 00:33:02.915679 kernel: CPU features: detected: Spectre-BHB
May 13 00:33:02.915686 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 13 00:33:02.915693 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 13 00:33:02.915700 kernel: CPU features: detected: ARM erratum 1418040
May 13 00:33:02.915707 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 13 00:33:02.915714 kernel: alternatives: applying boot alternatives
May 13 00:33:02.915722 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c683f9f6a9915f3c14a7bce5c93750f29fcd5cf6eb0774e11e882c5681cc19c0
May 13 00:33:02.915729 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 13 00:33:02.915736 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 13 00:33:02.915743 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 13 00:33:02.915749 kernel: Fallback order for Node 0: 0
May 13 00:33:02.915756 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 13 00:33:02.915762 kernel: Policy zone: DMA
May 13 00:33:02.915769 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 13 00:33:02.915777 kernel: software IO TLB: area num 4.
May 13 00:33:02.915784 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
May 13 00:33:02.915791 kernel: Memory: 2386404K/2572288K available (10304K kernel code, 2186K rwdata, 8104K rodata, 39424K init, 897K bss, 185884K reserved, 0K cma-reserved)
May 13 00:33:02.915798 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 13 00:33:02.915805 kernel: rcu: Preemptible hierarchical RCU implementation.
May 13 00:33:02.915812 kernel: rcu: RCU event tracing is enabled.
May 13 00:33:02.915819 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 13 00:33:02.915826 kernel: Trampoline variant of Tasks RCU enabled.
May 13 00:33:02.915832 kernel: Tracing variant of Tasks RCU enabled.
May 13 00:33:02.915839 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 13 00:33:02.915846 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 13 00:33:02.915853 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 13 00:33:02.915861 kernel: GICv3: 256 SPIs implemented
May 13 00:33:02.915867 kernel: GICv3: 0 Extended SPIs implemented
May 13 00:33:02.915874 kernel: Root IRQ handler: gic_handle_irq
May 13 00:33:02.915881 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 13 00:33:02.915887 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 13 00:33:02.915894 kernel: ITS [mem 0x08080000-0x0809ffff]
May 13 00:33:02.915901 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 13 00:33:02.915908 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 13 00:33:02.915915 kernel: GICv3: using LPI property table @0x00000000400f0000
May 13 00:33:02.915921 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 13 00:33:02.915928 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 13 00:33:02.915936 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:33:02.915943 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 13 00:33:02.915950 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 13 00:33:02.915957 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 13 00:33:02.915964 kernel: arm-pv: using stolen time PV
May 13 00:33:02.915971 kernel: Console: colour dummy device 80x25
May 13 00:33:02.915978 kernel: ACPI: Core revision 20230628
May 13 00:33:02.915985 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 13 00:33:02.915992 kernel: pid_max: default: 32768 minimum: 301
May 13 00:33:02.915999 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 13 00:33:02.916007 kernel: landlock: Up and running.
May 13 00:33:02.916014 kernel: SELinux: Initializing.
May 13 00:33:02.916020 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 00:33:02.916027 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 13 00:33:02.916034 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 13 00:33:02.916042 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 00:33:02.916049 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 13 00:33:02.916055 kernel: rcu: Hierarchical SRCU implementation.
May 13 00:33:02.916063 kernel: rcu: Max phase no-delay instances is 400.
May 13 00:33:02.916071 kernel: Platform MSI: ITS@0x8080000 domain created
May 13 00:33:02.916077 kernel: PCI/MSI: ITS@0x8080000 domain created
May 13 00:33:02.916084 kernel: Remapping and enabling EFI services.
May 13 00:33:02.916091 kernel: smp: Bringing up secondary CPUs ...
May 13 00:33:02.916102 kernel: Detected PIPT I-cache on CPU1
May 13 00:33:02.916109 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 13 00:33:02.916116 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 13 00:33:02.916123 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:33:02.916130 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 13 00:33:02.916138 kernel: Detected PIPT I-cache on CPU2
May 13 00:33:02.916145 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 13 00:33:02.916152 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 13 00:33:02.916164 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:33:02.916172 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 13 00:33:02.916179 kernel: Detected PIPT I-cache on CPU3
May 13 00:33:02.916187 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 13 00:33:02.916194 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 13 00:33:02.916201 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 13 00:33:02.916208 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 13 00:33:02.916216 kernel: smp: Brought up 1 node, 4 CPUs
May 13 00:33:02.916225 kernel: SMP: Total of 4 processors activated.
May 13 00:33:02.916232 kernel: CPU features: detected: 32-bit EL0 Support
May 13 00:33:02.916239 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 13 00:33:02.916246 kernel: CPU features: detected: Common not Private translations
May 13 00:33:02.916253 kernel: CPU features: detected: CRC32 instructions
May 13 00:33:02.916261 kernel: CPU features: detected: Enhanced Virtualization Traps
May 13 00:33:02.916268 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 13 00:33:02.916283 kernel: CPU features: detected: LSE atomic instructions
May 13 00:33:02.916291 kernel: CPU features: detected: Privileged Access Never
May 13 00:33:02.916298 kernel: CPU features: detected: RAS Extension Support
May 13 00:33:02.916305 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 13 00:33:02.916312 kernel: CPU: All CPU(s) started at EL1
May 13 00:33:02.916320 kernel: alternatives: applying system-wide alternatives
May 13 00:33:02.916344 kernel: devtmpfs: initialized
May 13 00:33:02.916352 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 13 00:33:02.916359 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 13 00:33:02.916369 kernel: pinctrl core: initialized pinctrl subsystem
May 13 00:33:02.916377 kernel: SMBIOS 3.0.0 present.
May 13 00:33:02.916384 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
May 13 00:33:02.916392 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 13 00:33:02.916399 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 13 00:33:02.916406 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 13 00:33:02.916414 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 13 00:33:02.916421 kernel: audit: initializing netlink subsys (disabled)
May 13 00:33:02.916429 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
May 13 00:33:02.916438 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 13 00:33:02.916445 kernel: cpuidle: using governor menu
May 13 00:33:02.916452 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 13 00:33:02.916459 kernel: ASID allocator initialised with 32768 entries
May 13 00:33:02.916467 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 13 00:33:02.916474 kernel: Serial: AMBA PL011 UART driver
May 13 00:33:02.916481 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 13 00:33:02.916488 kernel: Modules: 0 pages in range for non-PLT usage
May 13 00:33:02.916495 kernel: Modules: 509008 pages in range for PLT usage
May 13 00:33:02.916504 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 13 00:33:02.916511 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 13 00:33:02.916518 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 13 00:33:02.916526 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 13 00:33:02.916533 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 13 00:33:02.916540 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 13 00:33:02.916548 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 13 00:33:02.916555 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 13 00:33:02.916562 kernel: ACPI: Added _OSI(Module Device)
May 13 00:33:02.916571 kernel: ACPI: Added _OSI(Processor Device)
May 13 00:33:02.916578 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 13 00:33:02.916586 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 13 00:33:02.916593 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 13 00:33:02.916600 kernel: ACPI: Interpreter enabled
May 13 00:33:02.916608 kernel: ACPI: Using GIC for interrupt routing
May 13 00:33:02.916615 kernel: ACPI: MCFG table detected, 1 entries
May 13 00:33:02.916622 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 13 00:33:02.916629 kernel: printk: console [ttyAMA0] enabled
May 13 00:33:02.916641 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 13 00:33:02.916802 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 13 00:33:02.916880 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 13 00:33:02.916946 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 13 00:33:02.917010 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 13 00:33:02.917078 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 13 00:33:02.917088 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 13 00:33:02.917098 kernel: PCI host bridge to bus 0000:00
May 13 00:33:02.917168 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 13 00:33:02.917226 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 13 00:33:02.917294 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 13 00:33:02.917375 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 13 00:33:02.917457 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 13 00:33:02.917534 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 13 00:33:02.917622 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 13 00:33:02.917689 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 13 00:33:02.917755 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 13 00:33:02.917821 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 13 00:33:02.917888 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 13 00:33:02.917956 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 13 00:33:02.918018 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 13 00:33:02.918075 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 13 00:33:02.918133 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 13 00:33:02.918143 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 13 00:33:02.918151 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 13 00:33:02.918158 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 13 00:33:02.918166 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 13 00:33:02.918173 kernel: iommu: Default domain type: Translated
May 13 00:33:02.918182 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 13 00:33:02.918190 kernel: efivars: Registered efivars operations
May 13 00:33:02.918197 kernel: vgaarb: loaded
May 13 00:33:02.918204 kernel: clocksource: Switched to clocksource arch_sys_counter
May 13 00:33:02.918212 kernel: VFS: Disk quotas dquot_6.6.0
May 13 00:33:02.918219 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 13 00:33:02.918226 kernel: pnp: PnP ACPI init
May 13 00:33:02.918311 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 13 00:33:02.918385 kernel: pnp: PnP ACPI: found 1 devices
May 13 00:33:02.918398 kernel: NET: Registered PF_INET protocol family
May 13 00:33:02.918406 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 13 00:33:02.918413 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 13 00:33:02.918421 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 13 00:33:02.918428 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 13 00:33:02.918436 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 13 00:33:02.918443 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 13 00:33:02.918451 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 00:33:02.918459 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 13 00:33:02.918467 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 13 00:33:02.918474 kernel: PCI: CLS 0 bytes, default 64
May 13 00:33:02.918481 kernel: kvm [1]: HYP mode not available
May 13 00:33:02.918489 kernel: Initialise system trusted keyrings
May 13 00:33:02.918496 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 13 00:33:02.918503 kernel: Key type asymmetric registered
May 13 00:33:02.918511 kernel: Asymmetric key parser 'x509' registered
May 13 00:33:02.918518 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 13 00:33:02.918526 kernel: io scheduler mq-deadline registered
May 13 00:33:02.918534 kernel: io scheduler kyber registered
May 13 00:33:02.918541 kernel: io scheduler bfq registered
May 13 00:33:02.918549 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 13 00:33:02.918556 kernel: ACPI: button: Power Button [PWRB]
May 13 00:33:02.918563 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 13 00:33:02.918642 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 13 00:33:02.918652 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 13 00:33:02.918660 kernel: thunder_xcv, ver 1.0
May 13 00:33:02.918667 kernel: thunder_bgx, ver 1.0
May 13 00:33:02.918676 kernel: nicpf, ver 1.0
May 13 00:33:02.918683 kernel: nicvf, ver 1.0
May 13 00:33:02.918756 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 13 00:33:02.918817 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-13T00:33:02 UTC (1747096382)
May 13 00:33:02.918827 kernel: hid: raw HID events driver (C) Jiri Kosina
May 13 00:33:02.918834 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 13 00:33:02.918842 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 13 00:33:02.918849 kernel: watchdog: Hard watchdog permanently disabled
May 13 00:33:02.918859 kernel: NET: Registered PF_INET6 protocol family
May 13 00:33:02.918866 kernel: Segment Routing with IPv6
May 13 00:33:02.918873 kernel: In-situ OAM (IOAM) with IPv6
May 13 00:33:02.918881 kernel: NET: Registered PF_PACKET protocol family
May 13 00:33:02.918888 kernel: Key type dns_resolver registered
May 13 00:33:02.918895 kernel: registered taskstats version 1
May 13 00:33:02.918902 kernel: Loading compiled-in X.509 certificates
May 13 00:33:02.918910 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: ce22d51a4ec909274ada9cb7da7d7cb78db539c6'
May 13 00:33:02.918917 kernel: Key type .fscrypt registered
May 13 00:33:02.918926 kernel: Key type fscrypt-provisioning registered
May 13 00:33:02.918933 kernel: ima: No TPM chip found, activating TPM-bypass!
May 13 00:33:02.918940 kernel: ima: Allocated hash algorithm: sha1
May 13 00:33:02.918947 kernel: ima: No architecture policies found
May 13 00:33:02.918955 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 13 00:33:02.918962 kernel: clk: Disabling unused clocks
May 13 00:33:02.918969 kernel: Freeing unused kernel memory: 39424K
May 13 00:33:02.918976 kernel: Run /init as init process
May 13 00:33:02.918983 kernel: with arguments:
May 13 00:33:02.918992 kernel: /init
May 13 00:33:02.918999 kernel: with environment:
May 13 00:33:02.919006 kernel: HOME=/
May 13 00:33:02.919013 kernel: TERM=linux
May 13 00:33:02.919020 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 13 00:33:02.919030 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 13 00:33:02.919039 systemd[1]: Detected virtualization kvm.
May 13 00:33:02.919049 systemd[1]: Detected architecture arm64.
May 13 00:33:02.919056 systemd[1]: Running in initrd.
May 13 00:33:02.919064 systemd[1]: No hostname configured, using default hostname.
May 13 00:33:02.919072 systemd[1]: Hostname set to .
May 13 00:33:02.919080 systemd[1]: Initializing machine ID from VM UUID.
May 13 00:33:02.919087 systemd[1]: Queued start job for default target initrd.target.
May 13 00:33:02.919095 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 00:33:02.919103 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 00:33:02.919113 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 13 00:33:02.919121 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 00:33:02.919129 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 13 00:33:02.919137 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 13 00:33:02.919146 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 13 00:33:02.919155 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 13 00:33:02.919163 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 00:33:02.919172 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 00:33:02.919180 systemd[1]: Reached target paths.target - Path Units.
May 13 00:33:02.919188 systemd[1]: Reached target slices.target - Slice Units.
May 13 00:33:02.919196 systemd[1]: Reached target swap.target - Swaps.
May 13 00:33:02.919204 systemd[1]: Reached target timers.target - Timer Units.
May 13 00:33:02.919211 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 13 00:33:02.919219 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 00:33:02.919227 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 13 00:33:02.919235 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 13 00:33:02.919244 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 00:33:02.919252 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 00:33:02.919260 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 00:33:02.919268 systemd[1]: Reached target sockets.target - Socket Units.
May 13 00:33:02.919285 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 13 00:33:02.919293 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 00:33:02.919301 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 13 00:33:02.919309 systemd[1]: Starting systemd-fsck-usr.service...
May 13 00:33:02.919319 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 00:33:02.919335 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 00:33:02.919343 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 00:33:02.919351 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 13 00:33:02.919359 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 00:33:02.919367 systemd[1]: Finished systemd-fsck-usr.service.
May 13 00:33:02.919377 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 00:33:02.919385 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:33:02.919412 systemd-journald[238]: Collecting audit messages is disabled.
May 13 00:33:02.919433 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 00:33:02.919441 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 00:33:02.919450 systemd-journald[238]: Journal started
May 13 00:33:02.919468 systemd-journald[238]: Runtime Journal (/run/log/journal/d0ea4fcbd0f34e0c998e5a98b7e78d4f) is 5.9M, max 47.3M, 41.4M free.
May 13 00:33:02.910797 systemd-modules-load[239]: Inserted module 'overlay'
May 13 00:33:02.923093 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 00:33:02.929346 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 13 00:33:02.926579 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 00:33:02.929844 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 00:33:02.933548 kernel: Bridge firewalling registered
May 13 00:33:02.931188 systemd-modules-load[239]: Inserted module 'br_netfilter'
May 13 00:33:02.934354 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 00:33:02.937554 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 00:33:02.941344 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 00:33:02.945128 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 00:33:02.947481 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 00:33:02.950021 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 13 00:33:02.951202 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 00:33:02.955935 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 00:33:02.964291 dracut-cmdline[275]: dracut-dracut-053
May 13 00:33:02.966888 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c683f9f6a9915f3c14a7bce5c93750f29fcd5cf6eb0774e11e882c5681cc19c0
May 13 00:33:02.986639 systemd-resolved[277]: Positive Trust Anchors:
May 13 00:33:02.986657 systemd-resolved[277]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 00:33:02.986690 systemd-resolved[277]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 00:33:02.991563 systemd-resolved[277]: Defaulting to hostname 'linux'.
May 13 00:33:02.992795 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 00:33:02.996019 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 00:33:03.033367 kernel: SCSI subsystem initialized
May 13 00:33:03.037340 kernel: Loading iSCSI transport class v2.0-870.
May 13 00:33:03.046351 kernel: iscsi: registered transport (tcp)
May 13 00:33:03.057673 kernel: iscsi: registered transport (qla4xxx)
May 13 00:33:03.057698 kernel: QLogic iSCSI HBA Driver
May 13 00:33:03.099347 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 13 00:33:03.114466 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 13 00:33:03.133356 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 13 00:33:03.133414 kernel: device-mapper: uevent: version 1.0.3
May 13 00:33:03.134568 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 13 00:33:03.180394 kernel: raid6: neonx8 gen() 15766 MB/s
May 13 00:33:03.197354 kernel: raid6: neonx4 gen() 15556 MB/s
May 13 00:33:03.214349 kernel: raid6: neonx2 gen() 13220 MB/s
May 13 00:33:03.231346 kernel: raid6: neonx1 gen() 10476 MB/s
May 13 00:33:03.248346 kernel: raid6: int64x8 gen() 6953 MB/s
May 13 00:33:03.265344 kernel: raid6: int64x4 gen() 7324 MB/s
May 13 00:33:03.282345 kernel: raid6: int64x2 gen() 6105 MB/s
May 13 00:33:03.299435 kernel: raid6: int64x1 gen() 5047 MB/s
May 13 00:33:03.299453 kernel: raid6: using algorithm neonx8 gen() 15766 MB/s
May 13 00:33:03.317413 kernel: raid6: .... xor() 11907 MB/s, rmw enabled
May 13 00:33:03.317427 kernel: raid6: using neon recovery algorithm
May 13 00:33:03.322345 kernel: xor: measuring software checksum speed
May 13 00:33:03.323653 kernel: 8regs : 17567 MB/sec
May 13 00:33:03.323665 kernel: 32regs : 19641 MB/sec
May 13 00:33:03.324897 kernel: arm64_neon : 26936 MB/sec
May 13 00:33:03.324920 kernel: xor: using function: arm64_neon (26936 MB/sec)
May 13 00:33:03.377355 kernel: Btrfs loaded, zoned=no, fsverity=no
May 13 00:33:03.388395 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 13 00:33:03.399467 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 00:33:03.410667 systemd-udevd[460]: Using default interface naming scheme 'v255'.
May 13 00:33:03.413931 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 00:33:03.416672 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 13 00:33:03.432308 dracut-pre-trigger[472]: rd.md=0: removing MD RAID activation
May 13 00:33:03.458102 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 00:33:03.469577 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 00:33:03.508473 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 00:33:03.519580 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 13 00:33:03.533659 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 13 00:33:03.535193 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 00:33:03.539438 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 00:33:03.542191 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 00:33:03.551522 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 13 00:33:03.558341 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 13 00:33:03.564055 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 13 00:33:03.565290 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 13 00:33:03.572948 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 13 00:33:03.572973 kernel: GPT:9289727 != 19775487
May 13 00:33:03.572984 kernel: GPT:Alternate GPT header not at the end of the disk.
May 13 00:33:03.572994 kernel: GPT:9289727 != 19775487
May 13 00:33:03.573011 kernel: GPT: Use GNU Parted to correct GPT errors.
May 13 00:33:03.573020 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:33:03.571600 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 00:33:03.571675 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 00:33:03.575393 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 00:33:03.578233 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
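The GPT warnings above encode simple arithmetic: the backup (alternate) GPT header belongs on the last LBA of the disk, which for 19775488 512-byte logical blocks is LBA 19775487, but this image's backup header sits at LBA 9289727 — the usual sign of a disk image that was grown after the partition table was written. A sketch of the check, under that interpretation:

```python
# GPT places its backup header on the last logical block of the disk.
def gpt_alt_header_check(total_blocks, alt_header_lba):
    expected = total_blocks - 1  # last LBA of the disk
    return alt_header_lba == expected, expected

# Values from the log: 19775488 blocks, backup header found at LBA 9289727.
ok, expected = gpt_alt_header_check(19775488, 9289727)
print(ok, expected)  # mismatch -> the kernel's "GPT:9289727 != 19775487"
```

Tools such as GNU Parted or sgdisk can relocate the backup header to the new end of the disk, which is what the kernel's hint refers to.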
May 13 00:33:03.578316 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:33:03.581207 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 13 00:33:03.591450 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (519)
May 13 00:33:03.595346 kernel: BTRFS: device fsid ffc5eb33-beca-4ca0-9735-b9a50e66f21e devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (528)
May 13 00:33:03.594969 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 00:33:03.606620 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 13 00:33:03.608168 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:33:03.620722 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 13 00:33:03.625702 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 00:33:03.629896 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 13 00:33:03.631209 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 13 00:33:03.641473 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 13 00:33:03.643372 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 13 00:33:03.650207 disk-uuid[557]: Primary Header is updated.
May 13 00:33:03.650207 disk-uuid[557]: Secondary Entries is updated.
May 13 00:33:03.650207 disk-uuid[557]: Secondary Header is updated.
May 13 00:33:03.653431 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:33:03.674290 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 00:33:04.667347 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 13 00:33:04.668352 disk-uuid[559]: The operation has completed successfully.
May 13 00:33:04.689910 systemd[1]: disk-uuid.service: Deactivated successfully.
May 13 00:33:04.690011 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 13 00:33:04.709537 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 13 00:33:04.714510 sh[581]: Success
May 13 00:33:04.729376 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 13 00:33:04.759171 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 13 00:33:04.771868 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 13 00:33:04.775379 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 13 00:33:04.784068 kernel: BTRFS info (device dm-0): first mount of filesystem ffc5eb33-beca-4ca0-9735-b9a50e66f21e
May 13 00:33:04.784111 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 13 00:33:04.784122 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 13 00:33:04.786105 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 13 00:33:04.786824 kernel: BTRFS info (device dm-0): using free space tree
May 13 00:33:04.790305 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 13 00:33:04.791815 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 13 00:33:04.800539 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 13 00:33:04.802962 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 13 00:33:04.809781 kernel: BTRFS info (device vda6): first mount of filesystem 0068254f-7e0d-4c83-ad3e-204802432981
May 13 00:33:04.809827 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 00:33:04.809845 kernel: BTRFS info (device vda6): using free space tree
May 13 00:33:04.813351 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 00:33:04.820926 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 13 00:33:04.823040 kernel: BTRFS info (device vda6): last unmount of filesystem 0068254f-7e0d-4c83-ad3e-204802432981
May 13 00:33:04.828981 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 13 00:33:04.838536 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 13 00:33:04.919368 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 00:33:04.930540 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 00:33:04.943287 ignition[668]: Ignition 2.19.0
May 13 00:33:04.943303 ignition[668]: Stage: fetch-offline
May 13 00:33:04.943375 ignition[668]: no configs at "/usr/lib/ignition/base.d"
May 13 00:33:04.943385 ignition[668]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:33:04.943627 ignition[668]: parsed url from cmdline: ""
May 13 00:33:04.943631 ignition[668]: no config URL provided
May 13 00:33:04.943636 ignition[668]: reading system config file "/usr/lib/ignition/user.ign"
May 13 00:33:04.943645 ignition[668]: no config at "/usr/lib/ignition/user.ign"
May 13 00:33:04.943670 ignition[668]: op(1): [started] loading QEMU firmware config module
May 13 00:33:04.943675 ignition[668]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 13 00:33:04.952847 ignition[668]: op(1): [finished] loading QEMU firmware config module
May 13 00:33:04.957134 systemd-networkd[772]: lo: Link UP
May 13 00:33:04.957144 systemd-networkd[772]: lo: Gained carrier
May 13 00:33:04.958178 systemd-networkd[772]: Enumeration completed
May 13 00:33:04.958787 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 00:33:04.958790 systemd-networkd[772]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 00:33:04.964385 ignition[668]: parsing config with SHA512: 87601f154c99a30a9d126b30822c40536c36441c61baed25f10c8e7dcfab4311695d03df2ecdaa7db26b05e98c942c5d5e5b86a8ae2d3a4fc8e544fce20df7f4
May 13 00:33:04.960230 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 00:33:04.960542 systemd-networkd[772]: eth0: Link UP
May 13 00:33:04.960546 systemd-networkd[772]: eth0: Gained carrier
May 13 00:33:04.967813 ignition[668]: fetch-offline: fetch-offline passed
May 13 00:33:04.960552 systemd-networkd[772]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 00:33:04.967872 ignition[668]: Ignition finished successfully
May 13 00:33:04.961641 systemd[1]: Reached target network.target - Network.
May 13 00:33:04.967485 unknown[668]: fetched base config from "system"
May 13 00:33:04.967492 unknown[668]: fetched user config from "qemu"
May 13 00:33:04.969615 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 00:33:04.970928 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 13 00:33:04.975910 systemd-networkd[772]: eth0: DHCPv4 address 10.0.0.113/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 00:33:04.978500 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
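The DHCPv4 line above reports a 10.0.0.113/16 lease with gateway 10.0.0.1. Python's standard `ipaddress` module can expand what that prefix implies (illustrative only; this is not part of the boot flow):

```python
import ipaddress

# The lease as reported in the log: address/prefix and gateway.
iface = ipaddress.ip_interface("10.0.0.113/16")
gateway = ipaddress.ip_address("10.0.0.1")

print(iface.network)             # 10.0.0.0/16
print(iface.netmask)             # 255.255.0.0
print(gateway in iface.network)  # True: the gateway is on-link
```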
May 13 00:33:04.990013 ignition[779]: Ignition 2.19.0
May 13 00:33:04.990035 ignition[779]: Stage: kargs
May 13 00:33:04.990197 ignition[779]: no configs at "/usr/lib/ignition/base.d"
May 13 00:33:04.990206 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:33:04.992750 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 13 00:33:04.990939 ignition[779]: kargs: kargs passed
May 13 00:33:04.990983 ignition[779]: Ignition finished successfully
May 13 00:33:05.002499 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 13 00:33:05.013403 ignition[788]: Ignition 2.19.0
May 13 00:33:05.013414 ignition[788]: Stage: disks
May 13 00:33:05.013593 ignition[788]: no configs at "/usr/lib/ignition/base.d"
May 13 00:33:05.013605 ignition[788]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:33:05.016137 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 13 00:33:05.014301 ignition[788]: disks: disks passed
May 13 00:33:05.017966 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 13 00:33:05.014359 ignition[788]: Ignition finished successfully
May 13 00:33:05.019703 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 00:33:05.021442 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 00:33:05.023288 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 00:33:05.024951 systemd[1]: Reached target basic.target - Basic System.
May 13 00:33:05.041531 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 13 00:33:05.052611 systemd-fsck[798]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 13 00:33:05.057646 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 13 00:33:05.066482 systemd[1]: Mounting sysroot.mount - /sysroot...
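The fsck summary above ("14/553520 files, 52654/553472 blocks") reads as used/total inode and block counts in e2fsck's usual output format (an assumed interpretation). A quick computation of the utilization those figures imply:

```python
# e2fsck-style clean summary: used/total inodes and used/total blocks.
files_used, files_total = 14, 553520
blocks_used, blocks_total = 52654, 553472

inode_pct = 100 * files_used / files_total
block_pct = 100 * blocks_used / blocks_total
print(f"{inode_pct:.4f}% inodes, {block_pct:.1f}% blocks in use")
```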
May 13 00:33:05.115351 kernel: EXT4-fs (vda9): mounted filesystem 9903c37e-4e5a-41d4-80e5-5c3428d04b7e r/w with ordered data mode. Quota mode: none.
May 13 00:33:05.116125 systemd[1]: Mounted sysroot.mount - /sysroot.
May 13 00:33:05.117423 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 13 00:33:05.134433 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 00:33:05.136879 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 13 00:33:05.137882 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 13 00:33:05.137924 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 13 00:33:05.137946 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 00:33:05.144087 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 13 00:33:05.146633 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 13 00:33:05.151074 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (806)
May 13 00:33:05.151111 kernel: BTRFS info (device vda6): first mount of filesystem 0068254f-7e0d-4c83-ad3e-204802432981
May 13 00:33:05.151125 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 00:33:05.151135 kernel: BTRFS info (device vda6): using free space tree
May 13 00:33:05.155337 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 00:33:05.169611 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 00:33:05.207162 initrd-setup-root[830]: cut: /sysroot/etc/passwd: No such file or directory
May 13 00:33:05.211530 initrd-setup-root[837]: cut: /sysroot/etc/group: No such file or directory
May 13 00:33:05.214853 initrd-setup-root[844]: cut: /sysroot/etc/shadow: No such file or directory
May 13 00:33:05.219025 initrd-setup-root[851]: cut: /sysroot/etc/gshadow: No such file or directory
May 13 00:33:05.301383 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 13 00:33:05.316472 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 13 00:33:05.318952 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 13 00:33:05.324349 kernel: BTRFS info (device vda6): last unmount of filesystem 0068254f-7e0d-4c83-ad3e-204802432981
May 13 00:33:05.342428 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 13 00:33:05.345409 ignition[919]: INFO : Ignition 2.19.0
May 13 00:33:05.346370 ignition[919]: INFO : Stage: mount
May 13 00:33:05.346370 ignition[919]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:33:05.346370 ignition[919]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:33:05.349204 ignition[919]: INFO : mount: mount passed
May 13 00:33:05.349204 ignition[919]: INFO : Ignition finished successfully
May 13 00:33:05.349420 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 13 00:33:05.359410 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 13 00:33:05.782952 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 13 00:33:05.795505 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 13 00:33:05.803358 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (933)
May 13 00:33:05.803403 kernel: BTRFS info (device vda6): first mount of filesystem 0068254f-7e0d-4c83-ad3e-204802432981
May 13 00:33:05.804680 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 13 00:33:05.804696 kernel: BTRFS info (device vda6): using free space tree
May 13 00:33:05.807341 kernel: BTRFS info (device vda6): auto enabling async discard
May 13 00:33:05.808682 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 13 00:33:05.835135 ignition[950]: INFO : Ignition 2.19.0
May 13 00:33:05.835135 ignition[950]: INFO : Stage: files
May 13 00:33:05.836879 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:33:05.836879 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:33:05.836879 ignition[950]: DEBUG : files: compiled without relabeling support, skipping
May 13 00:33:05.840849 ignition[950]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 13 00:33:05.840849 ignition[950]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 13 00:33:05.847298 ignition[950]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 13 00:33:05.848766 ignition[950]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 13 00:33:05.848766 ignition[950]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 13 00:33:05.848271 unknown[950]: wrote ssh authorized keys file for user: core
May 13 00:33:05.853491 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
May 13 00:33:05.855224 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
May 13 00:33:05.855224 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 13 00:33:05.855224 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 13 00:33:05.855224 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 13 00:33:05.855224 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 13 00:33:05.855224 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 13 00:33:05.855224 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 13 00:33:05.855224 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 13 00:33:05.855224 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
May 13 00:33:06.114903 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
May 13 00:33:06.525685 ignition[950]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
May 13 00:33:06.525685 ignition[950]: INFO : files: op(8): [started] processing unit "containerd.service"
May 13 00:33:06.529154 ignition[950]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 13 00:33:06.529154 ignition[950]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 13 00:33:06.529154 ignition[950]: INFO : files: op(8): [finished] processing unit "containerd.service"
May 13 00:33:06.529154 ignition[950]: INFO : files: op(a): [started] processing unit "coreos-metadata.service"
May 13 00:33:06.529154 ignition[950]: INFO : files: op(a): op(b): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 00:33:06.529154 ignition[950]: INFO : files: op(a): op(b): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 13 00:33:06.529154 ignition[950]: INFO : files: op(a): [finished] processing unit "coreos-metadata.service"
May 13 00:33:06.529154 ignition[950]: INFO : files: op(c): [started] setting preset to disabled for "coreos-metadata.service"
May 13 00:33:06.562449 ignition[950]: INFO : files: op(c): op(d): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 13 00:33:06.568777 ignition[950]: INFO : files: op(c): op(d): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 13 00:33:06.568777 ignition[950]: INFO : files: op(c): [finished] setting preset to disabled for "coreos-metadata.service"
May 13 00:33:06.568777 ignition[950]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
May 13 00:33:06.573483 ignition[950]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 13 00:33:06.573483 ignition[950]: INFO : files: files passed
May 13 00:33:06.573483 ignition[950]: INFO : Ignition finished successfully
May 13 00:33:06.570775 systemd[1]: Finished ignition-files.service - Ignition (files).
May 13 00:33:06.587525 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 13 00:33:06.591115 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 13 00:33:06.594663 systemd[1]: ignition-quench.service: Deactivated successfully.
May 13 00:33:06.594751 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 13 00:33:06.600686 initrd-setup-root-after-ignition[979]: grep: /sysroot/oem/oem-release: No such file or directory
May 13 00:33:06.603820 initrd-setup-root-after-ignition[981]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 00:33:06.603820 initrd-setup-root-after-ignition[981]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 13 00:33:06.611749 initrd-setup-root-after-ignition[985]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 13 00:33:06.614831 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 00:33:06.616243 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 13 00:33:06.622467 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 13 00:33:06.632543 systemd-networkd[772]: eth0: Gained IPv6LL
May 13 00:33:06.645850 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 13 00:33:06.645942 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 13 00:33:06.647675 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 13 00:33:06.649534 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 13 00:33:06.651612 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 13 00:33:06.652441 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 13 00:33:06.673219 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 00:33:06.681558 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 13 00:33:06.689279 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 13 00:33:06.690536 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 00:33:06.692593 systemd[1]: Stopped target timers.target - Timer Units.
May 13 00:33:06.694265 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 13 00:33:06.694406 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 13 00:33:06.696847 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 13 00:33:06.698779 systemd[1]: Stopped target basic.target - Basic System.
May 13 00:33:06.700370 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 13 00:33:06.702054 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 13 00:33:06.704082 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 13 00:33:06.706032 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 13 00:33:06.707826 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 13 00:33:06.709721 systemd[1]: Stopped target sysinit.target - System Initialization.
May 13 00:33:06.711674 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 13 00:33:06.713340 systemd[1]: Stopped target swap.target - Swaps.
May 13 00:33:06.714883 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 13 00:33:06.715009 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 13 00:33:06.717312 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 13 00:33:06.719361 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 00:33:06.721378 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 13 00:33:06.722402 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 00:33:06.724474 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 13 00:33:06.724592 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 13 00:33:06.727382 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 13 00:33:06.727499 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 13 00:33:06.729542 systemd[1]: Stopped target paths.target - Path Units.
May 13 00:33:06.731084 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 13 00:33:06.734393 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 00:33:06.735647 systemd[1]: Stopped target slices.target - Slice Units.
May 13 00:33:06.737687 systemd[1]: Stopped target sockets.target - Socket Units.
May 13 00:33:06.739226 systemd[1]: iscsid.socket: Deactivated successfully.
May 13 00:33:06.739315 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 13 00:33:06.741005 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 13 00:33:06.741086 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 13 00:33:06.742601 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 13 00:33:06.742704 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 13 00:33:06.744471 systemd[1]: ignition-files.service: Deactivated successfully.
May 13 00:33:06.744567 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 13 00:33:06.757490 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 13 00:33:06.758419 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 13 00:33:06.758542 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 00:33:06.761356 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 13 00:33:06.763158 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 13 00:33:06.763298 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 00:33:06.765114 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 13 00:33:06.765209 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 13 00:33:06.770568 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 13 00:33:06.772735 ignition[1006]: INFO : Ignition 2.19.0
May 13 00:33:06.772735 ignition[1006]: INFO : Stage: umount
May 13 00:33:06.776933 ignition[1006]: INFO : no configs at "/usr/lib/ignition/base.d"
May 13 00:33:06.776933 ignition[1006]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 13 00:33:06.776933 ignition[1006]: INFO : umount: umount passed
May 13 00:33:06.776933 ignition[1006]: INFO : Ignition finished successfully
May 13 00:33:06.777356 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 13 00:33:06.779276 systemd[1]: ignition-mount.service: Deactivated successfully.
May 13 00:33:06.779395 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 13 00:33:06.782288 systemd[1]: Stopped target network.target - Network.
May 13 00:33:06.783201 systemd[1]: ignition-disks.service: Deactivated successfully.
May 13 00:33:06.783295 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 13 00:33:06.785164 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 13 00:33:06.785207 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 13 00:33:06.791989 systemd[1]: ignition-setup.service: Deactivated successfully.
May 13 00:33:06.792036 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 13 00:33:06.793818 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 13 00:33:06.793863 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 13 00:33:06.796995 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 13 00:33:06.798629 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 13 00:33:06.802463 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 13 00:33:06.803070 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 13 00:33:06.803169 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 13 00:33:06.806810 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 13 00:33:06.806891 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 13 00:33:06.809409 systemd-networkd[772]: eth0: DHCPv6 lease lost
May 13 00:33:06.812824 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 13 00:33:06.812934 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 13 00:33:06.815103 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 13 00:33:06.816269 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 13 00:33:06.822717 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 13 00:33:06.822772 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 13 00:33:06.836468 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 13 00:33:06.837634 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 13 00:33:06.837776 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 13 00:33:06.841126 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 13 00:33:06.841181 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 13 00:33:06.843045 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 13 00:33:06.843093 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 13 00:33:06.846608 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 13 00:33:06.846661 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 00:33:06.848644 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 00:33:06.861685 systemd[1]: network-cleanup.service: Deactivated successfully.
May 13 00:33:06.861802 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 13 00:33:06.868980 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 13 00:33:06.869124 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 00:33:06.871411 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 13 00:33:06.871453 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 13 00:33:06.873353 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 13 00:33:06.873386 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 00:33:06.875174 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 13 00:33:06.875229 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 13 00:33:06.877961 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 13 00:33:06.878009 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 13 00:33:06.880706 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 13 00:33:06.880752 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 13 00:33:06.898561 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 13 00:33:06.899632 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 13 00:33:06.899695 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 00:33:06.901875 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 13 00:33:06.901922 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:33:06.904093 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 13 00:33:06.904394 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 13 00:33:06.906393 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 13 00:33:06.909949 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 13 00:33:06.922997 systemd[1]: Switching root.
May 13 00:33:06.954241 systemd-journald[238]: Journal stopped
May 13 00:33:07.685938 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
May 13 00:33:07.685990 kernel: SELinux: policy capability network_peer_controls=1
May 13 00:33:07.686002 kernel: SELinux: policy capability open_perms=1
May 13 00:33:07.686012 kernel: SELinux: policy capability extended_socket_class=1
May 13 00:33:07.686027 kernel: SELinux: policy capability always_check_network=0
May 13 00:33:07.686039 kernel: SELinux: policy capability cgroup_seclabel=1
May 13 00:33:07.686049 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 13 00:33:07.686058 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 13 00:33:07.686068 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 13 00:33:07.686081 kernel: audit: type=1403 audit(1747096387.124:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 13 00:33:07.686091 systemd[1]: Successfully loaded SELinux policy in 33.718ms.
May 13 00:33:07.686107 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.043ms.
May 13 00:33:07.686133 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 13 00:33:07.686147 systemd[1]: Detected virtualization kvm.
May 13 00:33:07.686158 systemd[1]: Detected architecture arm64.
May 13 00:33:07.686168 systemd[1]: Detected first boot.
May 13 00:33:07.686178 systemd[1]: Initializing machine ID from VM UUID.
May 13 00:33:07.686188 zram_generator::config[1071]: No configuration found.
May 13 00:33:07.686201 systemd[1]: Populated /etc with preset unit settings.
May 13 00:33:07.686211 systemd[1]: Queued start job for default target multi-user.target.
May 13 00:33:07.686222 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 13 00:33:07.686235 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 13 00:33:07.686246 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 13 00:33:07.686269 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 13 00:33:07.686281 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 13 00:33:07.686292 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 13 00:33:07.686303 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 13 00:33:07.686316 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 13 00:33:07.686339 systemd[1]: Created slice user.slice - User and Session Slice.
May 13 00:33:07.686351 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 13 00:33:07.686362 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 00:33:07.686373 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 13 00:33:07.686384 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 13 00:33:07.686394 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 13 00:33:07.686405 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 13 00:33:07.686416 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 13 00:33:07.686428 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 13 00:33:07.686438 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 13 00:33:07.686449 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 13 00:33:07.686463 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 13 00:33:07.686474 systemd[1]: Reached target slices.target - Slice Units.
May 13 00:33:07.686485 systemd[1]: Reached target swap.target - Swaps.
May 13 00:33:07.686497 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 13 00:33:07.686508 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 13 00:33:07.686520 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 13 00:33:07.686532 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 13 00:33:07.686543 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 13 00:33:07.686554 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 13 00:33:07.686564 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 13 00:33:07.686575 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 13 00:33:07.686586 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 13 00:33:07.686596 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 13 00:33:07.686607 systemd[1]: Mounting media.mount - External Media Directory...
May 13 00:33:07.686618 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 13 00:33:07.686629 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 13 00:33:07.686639 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 13 00:33:07.686650 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 13 00:33:07.686660 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 00:33:07.686671 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 13 00:33:07.686681 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 13 00:33:07.686692 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 00:33:07.686702 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 00:33:07.686714 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 00:33:07.686726 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 13 00:33:07.686737 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 00:33:07.686747 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 13 00:33:07.686758 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
May 13 00:33:07.686769 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
May 13 00:33:07.686779 systemd[1]: Starting systemd-journald.service - Journal Service...
May 13 00:33:07.686790 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 13 00:33:07.686801 kernel: fuse: init (API version 7.39)
May 13 00:33:07.686811 kernel: ACPI: bus type drm_connector registered
May 13 00:33:07.686821 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 13 00:33:07.686832 kernel: loop: module loaded
May 13 00:33:07.686842 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 13 00:33:07.686852 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 13 00:33:07.686880 systemd-journald[1150]: Collecting audit messages is disabled.
May 13 00:33:07.686902 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 13 00:33:07.686915 systemd-journald[1150]: Journal started
May 13 00:33:07.686936 systemd-journald[1150]: Runtime Journal (/run/log/journal/d0ea4fcbd0f34e0c998e5a98b7e78d4f) is 5.9M, max 47.3M, 41.4M free.
May 13 00:33:07.690898 systemd[1]: Started systemd-journald.service - Journal Service.
May 13 00:33:07.693760 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 13 00:33:07.695112 systemd[1]: Mounted media.mount - External Media Directory.
May 13 00:33:07.696295 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 13 00:33:07.697517 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 13 00:33:07.698814 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 13 00:33:07.700080 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 13 00:33:07.701673 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 13 00:33:07.703260 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 13 00:33:07.703448 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 13 00:33:07.705021 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 00:33:07.705168 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 00:33:07.706624 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 00:33:07.706773 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 00:33:07.708213 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 00:33:07.708390 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 00:33:07.709872 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 13 00:33:07.710017 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 13 00:33:07.711441 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 00:33:07.711645 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 00:33:07.713136 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 13 00:33:07.714762 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 13 00:33:07.716509 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 13 00:33:07.727219 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 13 00:33:07.740423 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 13 00:33:07.742517 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 13 00:33:07.743656 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 13 00:33:07.745914 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 13 00:33:07.748052 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 13 00:33:07.749261 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 00:33:07.751492 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 13 00:33:07.752852 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 00:33:07.755927 systemd-journald[1150]: Time spent on flushing to /var/log/journal/d0ea4fcbd0f34e0c998e5a98b7e78d4f is 17.712ms for 826 entries.
May 13 00:33:07.755927 systemd-journald[1150]: System Journal (/var/log/journal/d0ea4fcbd0f34e0c998e5a98b7e78d4f) is 8.0M, max 195.6M, 187.6M free.
May 13 00:33:07.792047 systemd-journald[1150]: Received client request to flush runtime journal.
May 13 00:33:07.756532 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 13 00:33:07.759032 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 13 00:33:07.762770 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 13 00:33:07.764355 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 13 00:33:07.765642 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 13 00:33:07.779484 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 13 00:33:07.781043 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 13 00:33:07.783692 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 13 00:33:07.785950 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 13 00:33:07.786460 systemd-tmpfiles[1202]: ACLs are not supported, ignoring.
May 13 00:33:07.786472 systemd-tmpfiles[1202]: ACLs are not supported, ignoring.
May 13 00:33:07.790349 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 13 00:33:07.795476 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 13 00:33:07.810581 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 13 00:33:07.812025 udevadm[1209]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
May 13 00:33:07.828952 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 13 00:33:07.842542 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 13 00:33:07.853727 systemd-tmpfiles[1223]: ACLs are not supported, ignoring.
May 13 00:33:07.853749 systemd-tmpfiles[1223]: ACLs are not supported, ignoring.
May 13 00:33:07.857577 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 13 00:33:08.188424 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 13 00:33:08.201491 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 13 00:33:08.220612 systemd-udevd[1229]: Using default interface naming scheme 'v255'.
May 13 00:33:08.233615 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 13 00:33:08.243501 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 13 00:33:08.265349 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1243)
May 13 00:33:08.265563 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 13 00:33:08.271550 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
May 13 00:33:08.297215 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 13 00:33:08.305645 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 13 00:33:08.371422 systemd-networkd[1237]: lo: Link UP
May 13 00:33:08.371431 systemd-networkd[1237]: lo: Gained carrier
May 13 00:33:08.372120 systemd-networkd[1237]: Enumeration completed
May 13 00:33:08.372585 systemd-networkd[1237]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 00:33:08.372594 systemd-networkd[1237]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 13 00:33:08.373233 systemd-networkd[1237]: eth0: Link UP
May 13 00:33:08.373243 systemd-networkd[1237]: eth0: Gained carrier
May 13 00:33:08.373264 systemd-networkd[1237]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 13 00:33:08.382469 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 13 00:33:08.383871 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 13 00:33:08.385599 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 13 00:33:08.388728 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 13 00:33:08.391102 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 13 00:33:08.393376 systemd-networkd[1237]: eth0: DHCPv4 address 10.0.0.113/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 13 00:33:08.401432 lvm[1266]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 00:33:08.415887 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 13 00:33:08.437308 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 13 00:33:08.439151 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 13 00:33:08.449460 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 13 00:33:08.454417 lvm[1275]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 13 00:33:08.495694 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 13 00:33:08.497086 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 13 00:33:08.498386 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 13 00:33:08.498420 systemd[1]: Reached target local-fs.target - Local File Systems.
May 13 00:33:08.499391 systemd[1]: Reached target machines.target - Containers.
May 13 00:33:08.501310 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 13 00:33:08.521444 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 13 00:33:08.523711 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 13 00:33:08.524802 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 00:33:08.525664 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 13 00:33:08.530216 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 13 00:33:08.535773 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 13 00:33:08.539993 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 13 00:33:08.541715 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 13 00:33:08.547351 kernel: loop0: detected capacity change from 0 to 114328
May 13 00:33:08.552808 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 13 00:33:08.555828 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 13 00:33:08.560335 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 13 00:33:08.592351 kernel: loop1: detected capacity change from 0 to 194096
May 13 00:33:08.636404 kernel: loop2: detected capacity change from 0 to 114432
May 13 00:33:08.678352 kernel: loop3: detected capacity change from 0 to 114328
May 13 00:33:08.684405 kernel: loop4: detected capacity change from 0 to 194096
May 13 00:33:08.690371 kernel: loop5: detected capacity change from 0 to 114432
May 13 00:33:08.693580 (sd-merge)[1298]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 13 00:33:08.693928 (sd-merge)[1298]: Merged extensions into '/usr'.
May 13 00:33:08.698269 systemd[1]: Reloading requested from client PID 1283 ('systemd-sysext') (unit systemd-sysext.service)...
May 13 00:33:08.698404 systemd[1]: Reloading...
May 13 00:33:08.743363 zram_generator::config[1327]: No configuration found.
May 13 00:33:08.760241 ldconfig[1279]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 13 00:33:08.840023 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 00:33:08.884301 systemd[1]: Reloading finished in 184 ms.
May 13 00:33:08.900974 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 13 00:33:08.902452 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 13 00:33:08.917513 systemd[1]: Starting ensure-sysext.service...
May 13 00:33:08.919368 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 13 00:33:08.924666 systemd[1]: Reloading requested from client PID 1368 ('systemctl') (unit ensure-sysext.service)...
May 13 00:33:08.924680 systemd[1]: Reloading...
May 13 00:33:08.936202 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 13 00:33:08.936534 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 13 00:33:08.937169 systemd-tmpfiles[1369]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 13 00:33:08.937430 systemd-tmpfiles[1369]: ACLs are not supported, ignoring.
May 13 00:33:08.937483 systemd-tmpfiles[1369]: ACLs are not supported, ignoring.
May 13 00:33:08.940128 systemd-tmpfiles[1369]: Detected autofs mount point /boot during canonicalization of boot.
May 13 00:33:08.940143 systemd-tmpfiles[1369]: Skipping /boot
May 13 00:33:08.947193 systemd-tmpfiles[1369]: Detected autofs mount point /boot during canonicalization of boot.
May 13 00:33:08.947206 systemd-tmpfiles[1369]: Skipping /boot
May 13 00:33:08.969472 zram_generator::config[1398]: No configuration found.
May 13 00:33:09.061610 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 00:33:09.106579 systemd[1]: Reloading finished in 181 ms.
May 13 00:33:09.120020 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 13 00:33:09.141466 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 13 00:33:09.143773 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 13 00:33:09.148464 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 13 00:33:09.151483 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 13 00:33:09.156419 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 13 00:33:09.166354 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 00:33:09.171579 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 00:33:09.173566 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 00:33:09.181668 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 00:33:09.182882 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 00:33:09.183740 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 13 00:33:09.186828 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 00:33:09.186959 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 00:33:09.188889 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 00:33:09.189021 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 00:33:09.190603 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 00:33:09.190807 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 00:33:09.196460 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 00:33:09.196648 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 00:33:09.207635 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 13 00:33:09.209646 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 13 00:33:09.213754 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 13 00:33:09.218223 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 00:33:09.218655 augenrules[1475]: No rules
May 13 00:33:09.231574 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 00:33:09.233702 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 00:33:09.233727 systemd-resolved[1445]: Positive Trust Anchors:
May 13 00:33:09.233737 systemd-resolved[1445]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 13 00:33:09.233769 systemd-resolved[1445]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 13 00:33:09.237568 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 00:33:09.238599 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 00:33:09.238716 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 00:33:09.239168 systemd-resolved[1445]: Defaulting to hostname 'linux'.
May 13 00:33:09.239692 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 13 00:33:09.241423 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 13 00:33:09.242958 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 00:33:09.243099 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 00:33:09.244612 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 00:33:09.244752 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 00:33:09.246320 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 13 00:33:09.247699 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 00:33:09.247888 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 00:33:09.253654 systemd[1]: Reached target network.target - Network.
May 13 00:33:09.254668 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 13 00:33:09.255997 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 13 00:33:09.271546 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 13 00:33:09.273548 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 13 00:33:09.275490 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 13 00:33:09.280582 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 13 00:33:09.281623 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 13 00:33:09.281769 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 13 00:33:09.282716 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 13 00:33:09.282858 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 13 00:33:09.284331 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 13 00:33:09.284468 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 13 00:33:09.285910 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 13 00:33:09.286037 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 13 00:33:09.287623 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 13 00:33:09.287804 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 13 00:33:09.290676 systemd[1]: Finished ensure-sysext.service.
May 13 00:33:09.294674 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 13 00:33:09.294724 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 13 00:33:09.306543 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 13 00:33:09.354686 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 13 00:33:08.912722 systemd-resolved[1445]: Clock change detected. Flushing caches.
May 13 00:33:08.917867 systemd-journald[1150]: Time jumped backwards, rotating.
May 13 00:33:08.912771 systemd-timesyncd[1512]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 13 00:33:08.912816 systemd-timesyncd[1512]: Initial clock synchronization to Tue 2025-05-13 00:33:08.912667 UTC.
May 13 00:33:08.914358 systemd[1]: Reached target sysinit.target - System Initialization.
May 13 00:33:08.915616 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 13 00:33:08.916958 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 13 00:33:08.918266 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 13 00:33:08.919503 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 13 00:33:08.919545 systemd[1]: Reached target paths.target - Path Units.
May 13 00:33:08.920461 systemd[1]: Reached target time-set.target - System Time Set.
May 13 00:33:08.923263 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 13 00:33:08.924438 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 13 00:33:08.925878 systemd[1]: Reached target timers.target - Timer Units.
May 13 00:33:08.927393 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 13 00:33:08.929848 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 13 00:33:08.931898 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 13 00:33:08.937517 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 13 00:33:08.938614 systemd[1]: Reached target sockets.target - Socket Units.
May 13 00:33:08.939536 systemd[1]: Reached target basic.target - Basic System.
May 13 00:33:08.940640 systemd[1]: System is tainted: cgroupsv1
May 13 00:33:08.940685 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 13 00:33:08.940705 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 13 00:33:08.941703 systemd[1]: Starting containerd.service - containerd container runtime...
May 13 00:33:08.943631 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 13 00:33:08.946490 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 13 00:33:08.950724 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 13 00:33:08.955334 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 13 00:33:08.958807 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 13 00:33:08.963617 jq[1519]: false
May 13 00:33:08.962485 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 13 00:33:08.964269 extend-filesystems[1521]: Found loop3
May 13 00:33:08.965204 extend-filesystems[1521]: Found loop4
May 13 00:33:08.965204 extend-filesystems[1521]: Found loop5
May 13 00:33:08.965204 extend-filesystems[1521]: Found vda
May 13 00:33:08.965204 extend-filesystems[1521]: Found vda1
May 13 00:33:08.965204 extend-filesystems[1521]: Found vda2
May 13 00:33:08.965204 extend-filesystems[1521]: Found vda3
May 13 00:33:08.965204 extend-filesystems[1521]: Found usr
May 13 00:33:08.965204 extend-filesystems[1521]: Found vda4
May 13 00:33:08.965204 extend-filesystems[1521]: Found vda6
May 13 00:33:08.965204 extend-filesystems[1521]: Found vda7
May 13 00:33:08.965204 extend-filesystems[1521]: Found vda9
May 13 00:33:08.965204 extend-filesystems[1521]: Checking size of /dev/vda9
May 13 00:33:08.976724 dbus-daemon[1518]: [system] SELinux support is enabled
May 13 00:33:08.968954 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 13 00:33:08.976109 systemd[1]: Starting systemd-logind.service - User Login Management...
May 13 00:33:08.983575 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 13 00:33:08.985473 systemd[1]: Starting update-engine.service - Update Engine...
May 13 00:33:08.988641 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 13 00:33:08.992757 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 13 00:33:08.995157 extend-filesystems[1521]: Resized partition /dev/vda9
May 13 00:33:08.996443 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 13 00:33:08.996737 jq[1541]: true
May 13 00:33:08.996729 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 13 00:33:08.996980 systemd[1]: motdgen.service: Deactivated successfully.
May 13 00:33:08.997171 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 13 00:33:09.002447 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 13 00:33:09.002750 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 13 00:33:09.008552 extend-filesystems[1545]: resize2fs 1.47.1 (20-May-2024)
May 13 00:33:09.012916 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 13 00:33:09.012970 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 13 00:33:09.017206 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 13 00:33:09.017241 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 13 00:33:09.021656 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 13 00:33:09.021717 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1236)
May 13 00:33:09.041637 update_engine[1538]: I20250513 00:33:09.041394 1538 main.cc:92] Flatcar Update Engine starting
May 13 00:33:09.042704 jq[1547]: true
May 13 00:33:09.046619 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 13 00:33:09.046821 (ntainerd)[1556]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 13 00:33:09.049825 systemd[1]: Started update-engine.service - Update Engine.
May 13 00:33:09.060332 update_engine[1538]: I20250513 00:33:09.049114 1538 update_check_scheduler.cc:74] Next update check in 2m32s
May 13 00:33:09.051914 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 13 00:33:09.053007 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 13 00:33:09.063608 extend-filesystems[1545]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 13 00:33:09.063608 extend-filesystems[1545]: old_desc_blocks = 1, new_desc_blocks = 1
May 13 00:33:09.063608 extend-filesystems[1545]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 13 00:33:09.070898 extend-filesystems[1521]: Resized filesystem in /dev/vda9
May 13 00:33:09.065946 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 13 00:33:09.066177 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 13 00:33:09.074706 systemd-logind[1535]: Watching system buttons on /dev/input/event0 (Power Button)
May 13 00:33:09.075147 systemd-logind[1535]: New seat seat0.
May 13 00:33:09.077574 systemd[1]: Started systemd-logind.service - User Login Management.
May 13 00:33:09.111105 bash[1577]: Updated "/home/core/.ssh/authorized_keys"
May 13 00:33:09.113486 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 13 00:33:09.116132 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 13 00:33:09.124280 locksmithd[1560]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 13 00:33:09.234328 containerd[1556]: time="2025-05-13T00:33:09.234151562Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
May 13 00:33:09.258654 containerd[1556]: time="2025-05-13T00:33:09.258500362Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 13 00:33:09.260186 containerd[1556]: time="2025-05-13T00:33:09.260127122Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.89-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 13 00:33:09.260186 containerd[1556]: time="2025-05-13T00:33:09.260161802Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 13 00:33:09.260186 containerd[1556]: time="2025-05-13T00:33:09.260179162Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 13 00:33:09.260447 containerd[1556]: time="2025-05-13T00:33:09.260340322Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 13 00:33:09.260447 containerd[1556]: time="2025-05-13T00:33:09.260366482Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 13 00:33:09.260447 containerd[1556]: time="2025-05-13T00:33:09.260429642Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 13 00:33:09.260503 containerd[1556]: time="2025-05-13T00:33:09.260460842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 13 00:33:09.260710 containerd[1556]: time="2025-05-13T00:33:09.260688082Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 13 00:33:09.260710 containerd[1556]: time="2025-05-13T00:33:09.260708202Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 13 00:33:09.260761 containerd[1556]: time="2025-05-13T00:33:09.260721762Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 13 00:33:09.260761 containerd[1556]: time="2025-05-13T00:33:09.260737962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 13 00:33:09.260825 containerd[1556]: time="2025-05-13T00:33:09.260807802Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 13 00:33:09.261040 containerd[1556]: time="2025-05-13T00:33:09.261005202Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 13 00:33:09.261160 containerd[1556]: time="2025-05-13T00:33:09.261141242Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 13 00:33:09.261183 containerd[1556]: time="2025-05-13T00:33:09.261158922Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 13 00:33:09.261245 containerd[1556]: time="2025-05-13T00:33:09.261231482Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 13 00:33:09.261292 containerd[1556]: time="2025-05-13T00:33:09.261280562Z" level=info msg="metadata content store policy set" policy=shared
May 13 00:33:09.264373 containerd[1556]: time="2025-05-13T00:33:09.264339762Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 13 00:33:09.264398 containerd[1556]: time="2025-05-13T00:33:09.264384442Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 13 00:33:09.264416 containerd[1556]: time="2025-05-13T00:33:09.264399882Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 13 00:33:09.264432 containerd[1556]: time="2025-05-13T00:33:09.264415242Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 13 00:33:09.264460 containerd[1556]: time="2025-05-13T00:33:09.264431242Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 13 00:33:09.264596 containerd[1556]: time="2025-05-13T00:33:09.264565802Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 13 00:33:09.265584 containerd[1556]: time="2025-05-13T00:33:09.265547962Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 13 00:33:09.265735 containerd[1556]: time="2025-05-13T00:33:09.265708442Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 13 00:33:09.265760 containerd[1556]: time="2025-05-13T00:33:09.265734162Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
May 13 00:33:09.265760 containerd[1556]: time="2025-05-13T00:33:09.265748562Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 13 00:33:09.265792 containerd[1556]: time="2025-05-13T00:33:09.265763162Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 13 00:33:09.265792 containerd[1556]: time="2025-05-13T00:33:09.265777802Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 13 00:33:09.265839 containerd[1556]: time="2025-05-13T00:33:09.265790562Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 13 00:33:09.265839 containerd[1556]: time="2025-05-13T00:33:09.265804522Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 13 00:33:09.265839 containerd[1556]: time="2025-05-13T00:33:09.265819482Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 13 00:33:09.265839 containerd[1556]: time="2025-05-13T00:33:09.265832962Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 13 00:33:09.265903 containerd[1556]: time="2025-05-13T00:33:09.265845442Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 13 00:33:09.265903 containerd[1556]: time="2025-05-13T00:33:09.265863242Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 13 00:33:09.265903 containerd[1556]: time="2025-05-13T00:33:09.265883882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 13 00:33:09.265903 containerd[1556]: time="2025-05-13T00:33:09.265896682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 13 00:33:09.265969 containerd[1556]: time="2025-05-13T00:33:09.265908922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 13 00:33:09.265969 containerd[1556]: time="2025-05-13T00:33:09.265921802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 13 00:33:09.265969 containerd[1556]: time="2025-05-13T00:33:09.265933882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 13 00:33:09.265969 containerd[1556]: time="2025-05-13T00:33:09.265949042Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 13 00:33:09.265969 containerd[1556]: time="2025-05-13T00:33:09.265960722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 13 00:33:09.266051 containerd[1556]: time="2025-05-13T00:33:09.265975882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 13 00:33:09.266051 containerd[1556]: time="2025-05-13T00:33:09.265989402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 13 00:33:09.266051 containerd[1556]: time="2025-05-13T00:33:09.266002962Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 13 00:33:09.266051 containerd[1556]: time="2025-05-13T00:33:09.266013882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 13 00:33:09.266051 containerd[1556]: time="2025-05-13T00:33:09.266025602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
May 13 00:33:09.266051 containerd[1556]: time="2025-05-13T00:33:09.266038162Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 13 00:33:09.266144 containerd[1556]: time="2025-05-13T00:33:09.266054522Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
May 13 00:33:09.266144 containerd[1556]: time="2025-05-13T00:33:09.266074802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
May 13 00:33:09.266144 containerd[1556]: time="2025-05-13T00:33:09.266086362Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 13 00:33:09.266144 containerd[1556]: time="2025-05-13T00:33:09.266096922Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 13 00:33:09.266230 containerd[1556]: time="2025-05-13T00:33:09.266215202Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 13 00:33:09.266251 containerd[1556]: time="2025-05-13T00:33:09.266240082Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
May 13 00:33:09.266270 containerd[1556]: time="2025-05-13T00:33:09.266252082Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 13 00:33:09.266288 containerd[1556]: time="2025-05-13T00:33:09.266267162Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
May 13 00:33:09.266288 containerd[1556]: time="2025-05-13T00:33:09.266277282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 13 00:33:09.266324 containerd[1556]: time="2025-05-13T00:33:09.266289642Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
May 13 00:33:09.266324 containerd[1556]: time="2025-05-13T00:33:09.266299642Z" level=info msg="NRI interface is disabled by configuration."
May 13 00:33:09.266324 containerd[1556]: time="2025-05-13T00:33:09.266310562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 13 00:33:09.266764 containerd[1556]: time="2025-05-13T00:33:09.266668602Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 13 00:33:09.266864 containerd[1556]: time="2025-05-13T00:33:09.266767682Z" level=info msg="Connect containerd service"
May 13 00:33:09.266864 containerd[1556]: time="2025-05-13T00:33:09.266795202Z" level=info msg="using legacy CRI server"
May 13 00:33:09.266864 containerd[1556]: time="2025-05-13T00:33:09.266801602Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 13 00:33:09.266958 containerd[1556]: time="2025-05-13T00:33:09.266900442Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 13 00:33:09.267492 containerd[1556]: time="2025-05-13T00:33:09.267453602Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 13 00:33:09.267718 containerd[1556]: time="2025-05-13T00:33:09.267672082Z" level=info msg="Start subscribing containerd event"
May 13 00:33:09.267850 containerd[1556]: time="2025-05-13T00:33:09.267836722Z" level=info msg="Start recovering state"
May 13 00:33:09.267969 containerd[1556]: time="2025-05-13T00:33:09.267955322Z" level=info msg="Start event monitor"
May 13 00:33:09.268011 containerd[1556]: time="2025-05-13T00:33:09.267974922Z" level=info msg="Start snapshots syncer"
May 13 00:33:09.268011 containerd[1556]: time="2025-05-13T00:33:09.267960562Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 13 00:33:09.268047 containerd[1556]: time="2025-05-13T00:33:09.268032122Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 13 00:33:09.268066 containerd[1556]: time="2025-05-13T00:33:09.267987962Z" level=info msg="Start cni network conf syncer for default"
May 13 00:33:09.268066 containerd[1556]: time="2025-05-13T00:33:09.268060562Z" level=info msg="Start streaming server"
May 13 00:33:09.268993 containerd[1556]: time="2025-05-13T00:33:09.268129242Z" level=info msg="containerd successfully booted in 0.035425s"
May 13 00:33:09.268232 systemd[1]: Started containerd.service - containerd container runtime.
May 13 00:33:09.837933 systemd-networkd[1237]: eth0: Gained IPv6LL
May 13 00:33:09.840264 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 13 00:33:09.842142 systemd[1]: Reached target network-online.target - Network is Online.
May 13 00:33:09.854176 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 13 00:33:09.857377 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 00:33:09.860260 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 13 00:33:09.884098 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 13 00:33:09.885870 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 13 00:33:09.886214 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 13 00:33:09.888344 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 13 00:33:10.356627 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 00:33:10.360953 (kubelet)[1626]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 13 00:33:10.562862 sshd_keygen[1540]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 13 00:33:10.583764 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 13 00:33:10.598930 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 13 00:33:10.604559 systemd[1]: issuegen.service: Deactivated successfully.
May 13 00:33:10.604826 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 13 00:33:10.607501 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 13 00:33:10.618685 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 13 00:33:10.621531 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 13 00:33:10.623718 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
May 13 00:33:10.625053 systemd[1]: Reached target getty.target - Login Prompts.
May 13 00:33:10.626077 systemd[1]: Reached target multi-user.target - Multi-User System.
May 13 00:33:10.627427 systemd[1]: Startup finished in 4.968s (kernel) + 3.981s (userspace) = 8.949s.
May 13 00:33:10.847898 kubelet[1626]: E0513 00:33:10.847846 1626 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 13 00:33:10.850406 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 13 00:33:10.850598 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 13 00:33:15.155853 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 13 00:33:15.169882 systemd[1]: Started sshd@0-10.0.0.113:22-10.0.0.1:42588.service - OpenSSH per-connection server daemon (10.0.0.1:42588).
May 13 00:33:15.229717 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 42588 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU
May 13 00:33:15.230444 sshd[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:33:15.239582 systemd-logind[1535]: New session 1 of user core.
May 13 00:33:15.240451 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 13 00:33:15.255915 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 13 00:33:15.266474 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 13 00:33:15.271882 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 13 00:33:15.287954 (systemd)[1665]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 13 00:33:15.383735 systemd[1665]: Queued start job for default target default.target.
May 13 00:33:15.384125 systemd[1665]: Created slice app.slice - User Application Slice.
May 13 00:33:15.384148 systemd[1665]: Reached target paths.target - Paths.
May 13 00:33:15.384159 systemd[1665]: Reached target timers.target - Timers.
May 13 00:33:15.391762 systemd[1665]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 13 00:33:15.402363 systemd[1665]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 13 00:33:15.402722 systemd[1665]: Reached target sockets.target - Sockets.
May 13 00:33:15.402749 systemd[1665]: Reached target basic.target - Basic System.
May 13 00:33:15.402797 systemd[1665]: Reached target default.target - Main User Target.
May 13 00:33:15.402822 systemd[1665]: Startup finished in 100ms.
May 13 00:33:15.402926 systemd[1]: Started user@500.service - User Manager for UID 500.
May 13 00:33:15.404078 systemd[1]: Started session-1.scope - Session 1 of User core.
May 13 00:33:15.463838 systemd[1]: Started sshd@1-10.0.0.113:22-10.0.0.1:42590.service - OpenSSH per-connection server daemon (10.0.0.1:42590).
May 13 00:33:15.518720 sshd[1677]: Accepted publickey for core from 10.0.0.1 port 42590 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU
May 13 00:33:15.520121 sshd[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:33:15.525286 systemd-logind[1535]: New session 2 of user core.
May 13 00:33:15.534927 systemd[1]: Started session-2.scope - Session 2 of User core.
May 13 00:33:15.588631 sshd[1677]: pam_unix(sshd:session): session closed for user core
May 13 00:33:15.602222 systemd[1]: Started sshd@2-10.0.0.113:22-10.0.0.1:42594.service - OpenSSH per-connection server daemon (10.0.0.1:42594).
May 13 00:33:15.602797 systemd[1]: sshd@1-10.0.0.113:22-10.0.0.1:42590.service: Deactivated successfully.
May 13 00:33:15.605322 systemd[1]: session-2.scope: Deactivated successfully.
May 13 00:33:15.606833 systemd-logind[1535]: Session 2 logged out. Waiting for processes to exit.
May 13 00:33:15.607829 systemd-logind[1535]: Removed session 2.
May 13 00:33:15.636696 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 42594 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU
May 13 00:33:15.637214 sshd[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:33:15.643013 systemd-logind[1535]: New session 3 of user core.
May 13 00:33:15.655946 systemd[1]: Started session-3.scope - Session 3 of User core.
May 13 00:33:15.710819 sshd[1682]: pam_unix(sshd:session): session closed for user core
May 13 00:33:15.721910 systemd[1]: Started sshd@3-10.0.0.113:22-10.0.0.1:42610.service - OpenSSH per-connection server daemon (10.0.0.1:42610).
May 13 00:33:15.722345 systemd[1]: sshd@2-10.0.0.113:22-10.0.0.1:42594.service: Deactivated successfully.
May 13 00:33:15.723845 systemd[1]: session-3.scope: Deactivated successfully.
May 13 00:33:15.724853 systemd-logind[1535]: Session 3 logged out. Waiting for processes to exit.
May 13 00:33:15.730358 systemd-logind[1535]: Removed session 3.
May 13 00:33:15.764161 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 42610 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU
May 13 00:33:15.765547 sshd[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:33:15.769617 systemd-logind[1535]: New session 4 of user core.
May 13 00:33:15.784933 systemd[1]: Started session-4.scope - Session 4 of User core.
May 13 00:33:15.842314 sshd[1690]: pam_unix(sshd:session): session closed for user core
May 13 00:33:15.849917 systemd[1]: Started sshd@4-10.0.0.113:22-10.0.0.1:42620.service - OpenSSH per-connection server daemon (10.0.0.1:42620).
May 13 00:33:15.850327 systemd[1]: sshd@3-10.0.0.113:22-10.0.0.1:42610.service: Deactivated successfully.
May 13 00:33:15.857773 systemd[1]: session-4.scope: Deactivated successfully.
May 13 00:33:15.859184 systemd-logind[1535]: Session 4 logged out. Waiting for processes to exit.
May 13 00:33:15.861316 systemd-logind[1535]: Removed session 4.
May 13 00:33:15.889908 sshd[1698]: Accepted publickey for core from 10.0.0.1 port 42620 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU
May 13 00:33:15.891399 sshd[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:33:15.898325 systemd-logind[1535]: New session 5 of user core.
May 13 00:33:15.911950 systemd[1]: Started session-5.scope - Session 5 of User core.
May 13 00:33:15.993594 sudo[1705]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 13 00:33:15.993901 sudo[1705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 00:33:16.009593 sudo[1705]: pam_unix(sudo:session): session closed for user root
May 13 00:33:16.011920 sshd[1698]: pam_unix(sshd:session): session closed for user core
May 13 00:33:16.029873 systemd[1]: Started sshd@5-10.0.0.113:22-10.0.0.1:42634.service - OpenSSH per-connection server daemon (10.0.0.1:42634).
May 13 00:33:16.030252 systemd[1]: sshd@4-10.0.0.113:22-10.0.0.1:42620.service: Deactivated successfully.
May 13 00:33:16.036022 systemd-logind[1535]: Session 5 logged out. Waiting for processes to exit.
May 13 00:33:16.036100 systemd[1]: session-5.scope: Deactivated successfully.
May 13 00:33:16.038304 systemd-logind[1535]: Removed session 5.
May 13 00:33:16.075665 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 42634 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU
May 13 00:33:16.078084 sshd[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:33:16.084070 systemd-logind[1535]: New session 6 of user core.
May 13 00:33:16.090898 systemd[1]: Started session-6.scope - Session 6 of User core.
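sudo's log message packs the invoking user and the `PWD`/`USER`/`COMMAND` fields into a single line. A small sketch of splitting it apart, with the `" : "` and `" ; "` delimiters inferred from the entries in this log:

```python
def parse_sudo(msg: str):
    """Split a sudo log message into (invoking_user, {field: value})."""
    user, _, rest = msg.partition(" : ")
    fields = {}
    for part in rest.split(" ; "):
        key, _, value = part.partition("=")
        fields[key] = value
    return user, fields

user, fields = parse_sudo(
    "core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1"
)
print(user, fields["COMMAND"])
```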
May 13 00:33:16.150015 sudo[1715]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 13 00:33:16.150298 sudo[1715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 00:33:16.153876 sudo[1715]: pam_unix(sudo:session): session closed for user root
May 13 00:33:16.160163 sudo[1714]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
May 13 00:33:16.160839 sudo[1714]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 00:33:16.185199 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
May 13 00:33:16.186781 auditctl[1718]: No rules
May 13 00:33:16.187715 systemd[1]: audit-rules.service: Deactivated successfully.
May 13 00:33:16.187975 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
May 13 00:33:16.189842 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 13 00:33:16.215616 augenrules[1737]: No rules
May 13 00:33:16.217035 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 13 00:33:16.219177 sudo[1714]: pam_unix(sudo:session): session closed for user root
May 13 00:33:16.222767 sshd[1707]: pam_unix(sshd:session): session closed for user core
May 13 00:33:16.231979 systemd[1]: Started sshd@6-10.0.0.113:22-10.0.0.1:42644.service - OpenSSH per-connection server daemon (10.0.0.1:42644).
May 13 00:33:16.232445 systemd[1]: sshd@5-10.0.0.113:22-10.0.0.1:42634.service: Deactivated successfully.
May 13 00:33:16.234305 systemd-logind[1535]: Session 6 logged out. Waiting for processes to exit.
May 13 00:33:16.235108 systemd[1]: session-6.scope: Deactivated successfully.
May 13 00:33:16.236391 systemd-logind[1535]: Removed session 6.
May 13 00:33:16.262750 sshd[1743]: Accepted publickey for core from 10.0.0.1 port 42644 ssh2: RSA SHA256:ilwLBGyeejLKSU0doRti0j2W4iQ88Tp+35jhkd0iwiU
May 13 00:33:16.264865 sshd[1743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 13 00:33:16.270189 systemd-logind[1535]: New session 7 of user core.
May 13 00:33:16.280960 systemd[1]: Started session-7.scope - Session 7 of User core.
May 13 00:33:16.335237 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 13 00:33:16.335537 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 13 00:33:16.363479 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 13 00:33:16.380425 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 13 00:33:16.380757 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 13 00:33:16.928218 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 00:33:16.940827 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 00:33:16.956370 systemd[1]: Reloading requested from client PID 1804 ('systemctl') (unit session-7.scope)...
May 13 00:33:16.956393 systemd[1]: Reloading...
May 13 00:33:17.028682 zram_generator::config[1841]: No configuration found.
May 13 00:33:17.147427 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 13 00:33:17.200428 systemd[1]: Reloading finished in 243 ms.
May 13 00:33:17.239423 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 13 00:33:17.239531 systemd[1]: kubelet.service: Failed with result 'signal'.
May 13 00:33:17.239852 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 00:33:17.241636 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 13 00:33:17.333080 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 13 00:33:17.337903 (kubelet)[1900]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 13 00:33:17.375713 kubelet[1900]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 13 00:33:17.375713 kubelet[1900]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 13 00:33:17.375713 kubelet[1900]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
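The kubelet warns that `--container-runtime-endpoint` and `--volume-plugin-dir` should move into the file named by `--config`. A hedged sketch of the equivalent `KubeletConfiguration` stanza: the containerd socket path is an assumption, while the plugin directory is taken from the Flexvolume message later in this log. Per the warning itself, `--pod-infra-container-image` has no config-file replacement; the sandbox image will instead come from the CRI.

```yaml
# Sketch only; field names per kubelet.config.k8s.io/v1beta1.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Assumed socket path for a containerd runtime:
containerRuntimeEndpoint: "unix:///run/containerd/containerd.sock"
# Path seen in the Flexvolume probe message below:
volumePluginDir: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/"
```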
May 13 00:33:17.376677 kubelet[1900]: I0513 00:33:17.376629 1900 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 13 00:33:18.285631 kubelet[1900]: I0513 00:33:18.284987 1900 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 13 00:33:18.285631 kubelet[1900]: I0513 00:33:18.285017 1900 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 13 00:33:18.285631 kubelet[1900]: I0513 00:33:18.285238 1900 server.go:927] "Client rotation is on, will bootstrap in background" May 13 00:33:18.344914 kubelet[1900]: I0513 00:33:18.344785 1900 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 13 00:33:18.356889 kubelet[1900]: I0513 00:33:18.356838 1900 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 13 00:33:18.358518 kubelet[1900]: I0513 00:33:18.358454 1900 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 13 00:33:18.358728 kubelet[1900]: I0513 00:33:18.358516 1900 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"10.0.0.113","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 13 00:33:18.358848 kubelet[1900]: I0513 00:33:18.358836 1900 topology_manager.go:138] "Creating topology manager with none policy" May 13 00:33:18.358848 kubelet[1900]: I0513 00:33:18.358849 1900 container_manager_linux.go:301] "Creating device plugin manager" May 13 00:33:18.359121 kubelet[1900]: I0513 00:33:18.359095 1900 state_mem.go:36] "Initialized new in-memory state store" May 13 00:33:18.360183 kubelet[1900]: I0513 00:33:18.360141 1900 kubelet.go:400] "Attempting to sync node 
with API server" May 13 00:33:18.360183 kubelet[1900]: I0513 00:33:18.360165 1900 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 13 00:33:18.360269 kubelet[1900]: I0513 00:33:18.360263 1900 kubelet.go:312] "Adding apiserver pod source" May 13 00:33:18.360931 kubelet[1900]: I0513 00:33:18.360419 1900 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 13 00:33:18.360931 kubelet[1900]: E0513 00:33:18.360655 1900 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:33:18.360931 kubelet[1900]: E0513 00:33:18.360661 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:33:18.362073 kubelet[1900]: I0513 00:33:18.362040 1900 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 13 00:33:18.362448 kubelet[1900]: I0513 00:33:18.362424 1900 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 13 00:33:18.362551 kubelet[1900]: W0513 00:33:18.362538 1900 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
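The `HardEvictionThresholds` in the nodeConfig dump above mix absolute quantities (`memory.available` < 100Mi) with percentages of capacity (`nodefs.available` < 10%). A simplified sketch of how one such threshold is evaluated, illustrative only; the kubelet's real eviction manager also handles grace periods and min-reclaim:

```python
from typing import Optional

MI = 1024 * 1024  # bytes per Mi

def threshold_met(available: int, capacity: int,
                  quantity: Optional[int] = None,
                  percentage: Optional[float] = None) -> bool:
    """True when the signal fires: available resource is below the
    absolute quantity, or below the given fraction of capacity."""
    if quantity is not None:
        return available < quantity
    return available < percentage * capacity

# memory.available: 80Mi free on a 4Gi node is under the 100Mi threshold
print(threshold_met(80 * MI, 4096 * MI, quantity=100 * MI))  # True
# nodefs.available: 12% of disk free is still above the 10% threshold
print(threshold_met(12, 100, percentage=0.1))                # False
```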
May 13 00:33:18.366227 kubelet[1900]: I0513 00:33:18.363441 1900 server.go:1264] "Started kubelet" May 13 00:33:18.366227 kubelet[1900]: I0513 00:33:18.364245 1900 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 13 00:33:18.366227 kubelet[1900]: I0513 00:33:18.364416 1900 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 13 00:33:18.366227 kubelet[1900]: I0513 00:33:18.364765 1900 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 13 00:33:18.366227 kubelet[1900]: I0513 00:33:18.365380 1900 server.go:455] "Adding debug handlers to kubelet server" May 13 00:33:18.366405 kubelet[1900]: I0513 00:33:18.366305 1900 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 13 00:33:18.376362 kubelet[1900]: W0513 00:33:18.376328 1900 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope May 13 00:33:18.376769 kubelet[1900]: E0513 00:33:18.376749 1900 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope May 13 00:33:18.377474 kubelet[1900]: E0513 00:33:18.376936 1900 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.113\" not found" May 13 00:33:18.377773 kubelet[1900]: W0513 00:33:18.377754 1900 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.113" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope May 13 00:33:18.377907 kubelet[1900]: E0513 00:33:18.377894 1900 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to 
list *v1.Node: nodes "10.0.0.113" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope May 13 00:33:18.378035 kubelet[1900]: I0513 00:33:18.377083 1900 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 13 00:33:18.378298 kubelet[1900]: E0513 00:33:18.378075 1900 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.113.183eeeeaf4d0cbc2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.113,UID:10.0.0.113,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.113,},FirstTimestamp:2025-05-13 00:33:18.363413442 +0000 UTC m=+1.022596401,LastTimestamp:2025-05-13 00:33:18.363413442 +0000 UTC m=+1.022596401,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.113,}" May 13 00:33:18.378548 kubelet[1900]: I0513 00:33:18.377056 1900 volume_manager.go:291] "Starting Kubelet Volume Manager" May 13 00:33:18.379071 kubelet[1900]: W0513 00:33:18.379049 1900 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope May 13 00:33:18.379438 kubelet[1900]: E0513 00:33:18.379159 1900 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope May 13 00:33:18.379438 kubelet[1900]: E0513 00:33:18.379295 1900 
controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.113\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" May 13 00:33:18.380273 kubelet[1900]: I0513 00:33:18.380259 1900 reconciler.go:26] "Reconciler: start to sync state" May 13 00:33:18.380889 kubelet[1900]: E0513 00:33:18.380763 1900 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 13 00:33:18.381253 kubelet[1900]: I0513 00:33:18.381223 1900 factory.go:221] Registration of the systemd container factory successfully May 13 00:33:18.381332 kubelet[1900]: I0513 00:33:18.381310 1900 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 13 00:33:18.381980 kubelet[1900]: E0513 00:33:18.381904 1900 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.113.183eeeeaf5d94ce2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.113,UID:10.0.0.113,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.113,},FirstTimestamp:2025-05-13 00:33:18.380748002 +0000 UTC m=+1.039930961,LastTimestamp:2025-05-13 00:33:18.380748002 +0000 UTC m=+1.039930961,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.113,}" May 13 00:33:18.384225 kubelet[1900]: I0513 00:33:18.384204 1900 factory.go:221] 
Registration of the containerd container factory successfully May 13 00:33:18.404430 kubelet[1900]: I0513 00:33:18.404381 1900 cpu_manager.go:214] "Starting CPU manager" policy="none" May 13 00:33:18.404430 kubelet[1900]: I0513 00:33:18.404401 1900 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 13 00:33:18.404430 kubelet[1900]: I0513 00:33:18.404421 1900 state_mem.go:36] "Initialized new in-memory state store" May 13 00:33:18.412762 kubelet[1900]: E0513 00:33:18.412649 1900 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.113.183eeeeaf738056a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.113,UID:10.0.0.113,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.113 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.113,},FirstTimestamp:2025-05-13 00:33:18.403732842 +0000 UTC m=+1.062915761,LastTimestamp:2025-05-13 00:33:18.403732842 +0000 UTC m=+1.062915761,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.113,}" May 13 00:33:18.420013 kubelet[1900]: E0513 00:33:18.419814 1900 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.113.183eeeeaf738367a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.113,UID:10.0.0.113,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.113 status is now: 
NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.113,},FirstTimestamp:2025-05-13 00:33:18.403745402 +0000 UTC m=+1.062928361,LastTimestamp:2025-05-13 00:33:18.403745402 +0000 UTC m=+1.062928361,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.113,}" May 13 00:33:18.427545 kubelet[1900]: E0513 00:33:18.427447 1900 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.113.183eeeeaf738542a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.113,UID:10.0.0.113,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.113 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.113,},FirstTimestamp:2025-05-13 00:33:18.403753002 +0000 UTC m=+1.062936041,LastTimestamp:2025-05-13 00:33:18.403753002 +0000 UTC m=+1.062936041,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.113,}" May 13 00:33:18.478627 kubelet[1900]: I0513 00:33:18.478368 1900 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.113" May 13 00:33:18.486598 kubelet[1900]: I0513 00:33:18.486483 1900 policy_none.go:49] "None policy: Start" May 13 00:33:18.489445 kubelet[1900]: I0513 00:33:18.489030 1900 memory_manager.go:170] "Starting memorymanager" policy="None" May 13 00:33:18.489445 kubelet[1900]: I0513 00:33:18.489095 1900 state_mem.go:35] "Initializing new in-memory state store" May 13 00:33:18.494486 kubelet[1900]: I0513 00:33:18.494443 1900 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.113" May 13 00:33:18.498775 kubelet[1900]: I0513 00:33:18.498376 1900 
manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 13 00:33:18.498775 kubelet[1900]: I0513 00:33:18.498587 1900 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 13 00:33:18.498775 kubelet[1900]: I0513 00:33:18.498703 1900 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 13 00:33:18.503366 kubelet[1900]: E0513 00:33:18.503330 1900 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.113\" not found" May 13 00:33:18.514497 kubelet[1900]: I0513 00:33:18.514434 1900 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 13 00:33:18.516700 kubelet[1900]: I0513 00:33:18.516668 1900 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 13 00:33:18.516878 kubelet[1900]: E0513 00:33:18.516835 1900 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.113\" not found" May 13 00:33:18.516933 kubelet[1900]: I0513 00:33:18.516922 1900 status_manager.go:217] "Starting to sync pod status with apiserver" May 13 00:33:18.517041 kubelet[1900]: I0513 00:33:18.517030 1900 kubelet.go:2337] "Starting kubelet main sync loop" May 13 00:33:18.517517 kubelet[1900]: E0513 00:33:18.517489 1900 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" May 13 00:33:18.617704 kubelet[1900]: E0513 00:33:18.617568 1900 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.113\" not found" May 13 00:33:18.718148 kubelet[1900]: E0513 00:33:18.718112 1900 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.113\" not found" May 13 00:33:18.818590 kubelet[1900]: E0513 00:33:18.818556 1900 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.113\" 
not found" May 13 00:33:18.855988 sudo[1750]: pam_unix(sudo:session): session closed for user root May 13 00:33:18.857446 sshd[1743]: pam_unix(sshd:session): session closed for user core May 13 00:33:18.860436 systemd[1]: sshd@6-10.0.0.113:22-10.0.0.1:42644.service: Deactivated successfully. May 13 00:33:18.862536 systemd-logind[1535]: Session 7 logged out. Waiting for processes to exit. May 13 00:33:18.863125 systemd[1]: session-7.scope: Deactivated successfully. May 13 00:33:18.864212 systemd-logind[1535]: Removed session 7. May 13 00:33:18.919065 kubelet[1900]: E0513 00:33:18.919018 1900 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.113\" not found" May 13 00:33:19.019625 kubelet[1900]: E0513 00:33:19.019554 1900 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.113\" not found" May 13 00:33:19.120263 kubelet[1900]: E0513 00:33:19.120225 1900 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.113\" not found" May 13 00:33:19.220939 kubelet[1900]: E0513 00:33:19.220845 1900 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.113\" not found" May 13 00:33:19.288064 kubelet[1900]: I0513 00:33:19.287863 1900 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" May 13 00:33:19.288064 kubelet[1900]: W0513 00:33:19.288028 1900 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 13 00:33:19.321272 kubelet[1900]: E0513 00:33:19.321238 1900 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.113\" not found" May 13 00:33:19.361742 kubelet[1900]: E0513 00:33:19.361679 1900 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:33:19.421864 kubelet[1900]: E0513 00:33:19.421828 1900 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.113\" not found" May 13 00:33:19.522937 kubelet[1900]: E0513 00:33:19.522840 1900 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.113\" not found" May 13 00:33:19.624121 kubelet[1900]: I0513 00:33:19.624085 1900 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" May 13 00:33:19.624398 containerd[1556]: time="2025-05-13T00:33:19.624349722Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 13 00:33:19.624699 kubelet[1900]: I0513 00:33:19.624527 1900 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" May 13 00:33:20.362050 kubelet[1900]: I0513 00:33:20.362008 1900 apiserver.go:52] "Watching apiserver" May 13 00:33:20.362168 kubelet[1900]: E0513 00:33:20.362019 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:33:20.367097 kubelet[1900]: I0513 00:33:20.367054 1900 topology_manager.go:215] "Topology Admit Handler" podUID="e966b341-3de5-4e67-9f8b-231e82f2bd6b" podNamespace="calico-system" podName="calico-node-rpmz7" May 13 00:33:20.367181 kubelet[1900]: I0513 00:33:20.367154 1900 topology_manager.go:215] "Topology Admit Handler" podUID="3c0a2164-4e36-4a38-9d6e-97e893941f35" podNamespace="calico-system" podName="csi-node-driver-krtjb" May 13 00:33:20.368240 kubelet[1900]: I0513 00:33:20.367637 1900 topology_manager.go:215] "Topology Admit Handler" podUID="ef4490db-e9e6-4661-b3e2-4255a0634ea6" podNamespace="kube-system" podName="kube-proxy-q2dk9" May 13 00:33:20.368240 kubelet[1900]: E0513 00:33:20.367913 1900 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not 
ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krtjb" podUID="3c0a2164-4e36-4a38-9d6e-97e893941f35" May 13 00:33:20.379176 kubelet[1900]: I0513 00:33:20.379138 1900 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 13 00:33:20.389410 kubelet[1900]: I0513 00:33:20.389359 1900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3c0a2164-4e36-4a38-9d6e-97e893941f35-varrun\") pod \"csi-node-driver-krtjb\" (UID: \"3c0a2164-4e36-4a38-9d6e-97e893941f35\") " pod="calico-system/csi-node-driver-krtjb" May 13 00:33:20.389737 kubelet[1900]: I0513 00:33:20.389619 1900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3c0a2164-4e36-4a38-9d6e-97e893941f35-kubelet-dir\") pod \"csi-node-driver-krtjb\" (UID: \"3c0a2164-4e36-4a38-9d6e-97e893941f35\") " pod="calico-system/csi-node-driver-krtjb" May 13 00:33:20.389737 kubelet[1900]: I0513 00:33:20.389648 1900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3c0a2164-4e36-4a38-9d6e-97e893941f35-socket-dir\") pod \"csi-node-driver-krtjb\" (UID: \"3c0a2164-4e36-4a38-9d6e-97e893941f35\") " pod="calico-system/csi-node-driver-krtjb" May 13 00:33:20.389737 kubelet[1900]: I0513 00:33:20.389667 1900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ef4490db-e9e6-4661-b3e2-4255a0634ea6-kube-proxy\") pod \"kube-proxy-q2dk9\" (UID: \"ef4490db-e9e6-4661-b3e2-4255a0634ea6\") " pod="kube-system/kube-proxy-q2dk9" May 13 00:33:20.389737 kubelet[1900]: I0513 00:33:20.389683 1900 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e966b341-3de5-4e67-9f8b-231e82f2bd6b-node-certs\") pod \"calico-node-rpmz7\" (UID: \"e966b341-3de5-4e67-9f8b-231e82f2bd6b\") " pod="calico-system/calico-node-rpmz7" May 13 00:33:20.389737 kubelet[1900]: I0513 00:33:20.389699 1900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e966b341-3de5-4e67-9f8b-231e82f2bd6b-tigera-ca-bundle\") pod \"calico-node-rpmz7\" (UID: \"e966b341-3de5-4e67-9f8b-231e82f2bd6b\") " pod="calico-system/calico-node-rpmz7" May 13 00:33:20.389904 kubelet[1900]: I0513 00:33:20.389715 1900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e966b341-3de5-4e67-9f8b-231e82f2bd6b-cni-net-dir\") pod \"calico-node-rpmz7\" (UID: \"e966b341-3de5-4e67-9f8b-231e82f2bd6b\") " pod="calico-system/calico-node-rpmz7" May 13 00:33:20.389904 kubelet[1900]: I0513 00:33:20.389766 1900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e966b341-3de5-4e67-9f8b-231e82f2bd6b-cni-log-dir\") pod \"calico-node-rpmz7\" (UID: \"e966b341-3de5-4e67-9f8b-231e82f2bd6b\") " pod="calico-system/calico-node-rpmz7" May 13 00:33:20.389904 kubelet[1900]: I0513 00:33:20.389820 1900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e966b341-3de5-4e67-9f8b-231e82f2bd6b-xtables-lock\") pod \"calico-node-rpmz7\" (UID: \"e966b341-3de5-4e67-9f8b-231e82f2bd6b\") " pod="calico-system/calico-node-rpmz7" May 13 00:33:20.389904 kubelet[1900]: I0513 00:33:20.389840 1900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-qf9mm\" (UniqueName: \"kubernetes.io/projected/3c0a2164-4e36-4a38-9d6e-97e893941f35-kube-api-access-qf9mm\") pod \"csi-node-driver-krtjb\" (UID: \"3c0a2164-4e36-4a38-9d6e-97e893941f35\") " pod="calico-system/csi-node-driver-krtjb" May 13 00:33:20.389904 kubelet[1900]: I0513 00:33:20.389861 1900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef4490db-e9e6-4661-b3e2-4255a0634ea6-lib-modules\") pod \"kube-proxy-q2dk9\" (UID: \"ef4490db-e9e6-4661-b3e2-4255a0634ea6\") " pod="kube-system/kube-proxy-q2dk9" May 13 00:33:20.389999 kubelet[1900]: I0513 00:33:20.389886 1900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hv4c\" (UniqueName: \"kubernetes.io/projected/ef4490db-e9e6-4661-b3e2-4255a0634ea6-kube-api-access-4hv4c\") pod \"kube-proxy-q2dk9\" (UID: \"ef4490db-e9e6-4661-b3e2-4255a0634ea6\") " pod="kube-system/kube-proxy-q2dk9" May 13 00:33:20.389999 kubelet[1900]: I0513 00:33:20.389903 1900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e966b341-3de5-4e67-9f8b-231e82f2bd6b-var-run-calico\") pod \"calico-node-rpmz7\" (UID: \"e966b341-3de5-4e67-9f8b-231e82f2bd6b\") " pod="calico-system/calico-node-rpmz7" May 13 00:33:20.389999 kubelet[1900]: I0513 00:33:20.389918 1900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e966b341-3de5-4e67-9f8b-231e82f2bd6b-policysync\") pod \"calico-node-rpmz7\" (UID: \"e966b341-3de5-4e67-9f8b-231e82f2bd6b\") " pod="calico-system/calico-node-rpmz7" May 13 00:33:20.389999 kubelet[1900]: I0513 00:33:20.389962 1900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/e966b341-3de5-4e67-9f8b-231e82f2bd6b-var-lib-calico\") pod \"calico-node-rpmz7\" (UID: \"e966b341-3de5-4e67-9f8b-231e82f2bd6b\") " pod="calico-system/calico-node-rpmz7" May 13 00:33:20.389999 kubelet[1900]: I0513 00:33:20.389978 1900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e966b341-3de5-4e67-9f8b-231e82f2bd6b-cni-bin-dir\") pod \"calico-node-rpmz7\" (UID: \"e966b341-3de5-4e67-9f8b-231e82f2bd6b\") " pod="calico-system/calico-node-rpmz7" May 13 00:33:20.390087 kubelet[1900]: I0513 00:33:20.389997 1900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e966b341-3de5-4e67-9f8b-231e82f2bd6b-flexvol-driver-host\") pod \"calico-node-rpmz7\" (UID: \"e966b341-3de5-4e67-9f8b-231e82f2bd6b\") " pod="calico-system/calico-node-rpmz7" May 13 00:33:20.390087 kubelet[1900]: I0513 00:33:20.390016 1900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zghsh\" (UniqueName: \"kubernetes.io/projected/e966b341-3de5-4e67-9f8b-231e82f2bd6b-kube-api-access-zghsh\") pod \"calico-node-rpmz7\" (UID: \"e966b341-3de5-4e67-9f8b-231e82f2bd6b\") " pod="calico-system/calico-node-rpmz7" May 13 00:33:20.390087 kubelet[1900]: I0513 00:33:20.390033 1900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3c0a2164-4e36-4a38-9d6e-97e893941f35-registration-dir\") pod \"csi-node-driver-krtjb\" (UID: \"3c0a2164-4e36-4a38-9d6e-97e893941f35\") " pod="calico-system/csi-node-driver-krtjb" May 13 00:33:20.390087 kubelet[1900]: I0513 00:33:20.390048 1900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/ef4490db-e9e6-4661-b3e2-4255a0634ea6-xtables-lock\") pod \"kube-proxy-q2dk9\" (UID: \"ef4490db-e9e6-4661-b3e2-4255a0634ea6\") " pod="kube-system/kube-proxy-q2dk9" May 13 00:33:20.390087 kubelet[1900]: I0513 00:33:20.390063 1900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e966b341-3de5-4e67-9f8b-231e82f2bd6b-lib-modules\") pod \"calico-node-rpmz7\" (UID: \"e966b341-3de5-4e67-9f8b-231e82f2bd6b\") " pod="calico-system/calico-node-rpmz7" May 13 00:33:20.491906 kubelet[1900]: E0513 00:33:20.491879 1900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:33:20.491906 kubelet[1900]: W0513 00:33:20.491900 1900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:33:20.492366 kubelet[1900]: E0513 00:33:20.491926 1900 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:33:20.492366 kubelet[1900]: E0513 00:33:20.492128 1900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:33:20.492366 kubelet[1900]: W0513 00:33:20.492136 1900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:33:20.492366 kubelet[1900]: E0513 00:33:20.492151 1900 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:33:20.492366 kubelet[1900]: E0513 00:33:20.492357 1900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:33:20.492366 kubelet[1900]: W0513 00:33:20.492367 1900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:33:20.492511 kubelet[1900]: E0513 00:33:20.492383 1900 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:33:20.492579 kubelet[1900]: E0513 00:33:20.492565 1900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:33:20.492579 kubelet[1900]: W0513 00:33:20.492577 1900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:33:20.492636 kubelet[1900]: E0513 00:33:20.492589 1900 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:33:20.492770 kubelet[1900]: E0513 00:33:20.492760 1900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:33:20.492796 kubelet[1900]: W0513 00:33:20.492772 1900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:33:20.492796 kubelet[1900]: E0513 00:33:20.492785 1900 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:33:20.492986 kubelet[1900]: E0513 00:33:20.492975 1900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:33:20.492986 kubelet[1900]: W0513 00:33:20.492986 1900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:33:20.493038 kubelet[1900]: E0513 00:33:20.493000 1900 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:33:20.509512 kubelet[1900]: E0513 00:33:20.509480 1900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:33:20.509512 kubelet[1900]: W0513 00:33:20.509502 1900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:33:20.509649 kubelet[1900]: E0513 00:33:20.509529 1900 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:33:20.509819 kubelet[1900]: E0513 00:33:20.509792 1900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:33:20.509819 kubelet[1900]: W0513 00:33:20.509809 1900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:33:20.509819 kubelet[1900]: E0513 00:33:20.509819 1900 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:33:20.517636 kubelet[1900]: E0513 00:33:20.517536 1900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:33:20.517636 kubelet[1900]: W0513 00:33:20.517561 1900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:33:20.517636 kubelet[1900]: E0513 00:33:20.517581 1900 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 13 00:33:20.518236 kubelet[1900]: E0513 00:33:20.518175 1900 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 13 00:33:20.518236 kubelet[1900]: W0513 00:33:20.518191 1900 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 13 00:33:20.518236 kubelet[1900]: E0513 00:33:20.518204 1900 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 13 00:33:20.669753 kubelet[1900]: E0513 00:33:20.669725 1900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:33:20.670179 kubelet[1900]: E0513 00:33:20.670146 1900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:33:20.670528 containerd[1556]: time="2025-05-13T00:33:20.670495002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q2dk9,Uid:ef4490db-e9e6-4661-b3e2-4255a0634ea6,Namespace:kube-system,Attempt:0,}" May 13 00:33:20.671004 containerd[1556]: time="2025-05-13T00:33:20.670502482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rpmz7,Uid:e966b341-3de5-4e67-9f8b-231e82f2bd6b,Namespace:calico-system,Attempt:0,}" May 13 00:33:21.240664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1866772885.mount: Deactivated successfully. 
May 13 00:33:21.249455 containerd[1556]: time="2025-05-13T00:33:21.249380882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 00:33:21.250385 containerd[1556]: time="2025-05-13T00:33:21.250346522Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 00:33:21.250789 containerd[1556]: time="2025-05-13T00:33:21.250758402Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175"
May 13 00:33:21.251371 containerd[1556]: time="2025-05-13T00:33:21.251346642Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
May 13 00:33:21.251972 containerd[1556]: time="2025-05-13T00:33:21.251887082Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 00:33:21.255446 containerd[1556]: time="2025-05-13T00:33:21.255407842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 13 00:33:21.256788 containerd[1556]: time="2025-05-13T00:33:21.256522362Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 585.50936ms"
May 13 00:33:21.257187 containerd[1556]: time="2025-05-13T00:33:21.257161602Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 586.57956ms"
May 13 00:33:21.363013 kubelet[1900]: E0513 00:33:21.362945 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:33:21.366689 containerd[1556]: time="2025-05-13T00:33:21.364364562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:33:21.366689 containerd[1556]: time="2025-05-13T00:33:21.366612802Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:33:21.366689 containerd[1556]: time="2025-05-13T00:33:21.366630442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:33:21.366689 containerd[1556]: time="2025-05-13T00:33:21.366592002Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:33:21.366689 containerd[1556]: time="2025-05-13T00:33:21.366655682Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:33:21.366924 containerd[1556]: time="2025-05-13T00:33:21.366736362Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:33:21.366924 containerd[1556]: time="2025-05-13T00:33:21.366672122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:33:21.366924 containerd[1556]: time="2025-05-13T00:33:21.366754682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:33:21.471429 containerd[1556]: time="2025-05-13T00:33:21.471392962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rpmz7,Uid:e966b341-3de5-4e67-9f8b-231e82f2bd6b,Namespace:calico-system,Attempt:0,} returns sandbox id \"619de8f5eaacb21d068209b6bcb34be94dcf93586d926a617d46794a40f36ae8\""
May 13 00:33:21.474195 kubelet[1900]: E0513 00:33:21.473389 1900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:33:21.474581 containerd[1556]: time="2025-05-13T00:33:21.472825402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q2dk9,Uid:ef4490db-e9e6-4661-b3e2-4255a0634ea6,Namespace:kube-system,Attempt:0,} returns sandbox id \"12e50ce96e02d4924b4186d27be6ece700b9ff161425e8e53ed1413f3950ea48\""
May 13 00:33:21.475479 containerd[1556]: time="2025-05-13T00:33:21.475436962Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\""
May 13 00:33:21.476039 kubelet[1900]: E0513 00:33:21.475918 1900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:33:21.518002 kubelet[1900]: E0513 00:33:21.517890 1900 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krtjb" podUID="3c0a2164-4e36-4a38-9d6e-97e893941f35"
May 13 00:33:22.314480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3375462236.mount: Deactivated successfully.
May 13 00:33:22.364110 kubelet[1900]: E0513 00:33:22.364059 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:33:22.401279 containerd[1556]: time="2025-05-13T00:33:22.401229002Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:33:22.402518 containerd[1556]: time="2025-05-13T00:33:22.402467162Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=6492223"
May 13 00:33:22.408290 containerd[1556]: time="2025-05-13T00:33:22.407214882Z" level=info msg="ImageCreate event name:\"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:33:22.413900 containerd[1556]: time="2025-05-13T00:33:22.413841602Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:33:22.414860 containerd[1556]: time="2025-05-13T00:33:22.414817282Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6492045\" in 939.23492ms"
May 13 00:33:22.414924 containerd[1556]: time="2025-05-13T00:33:22.414867842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:dd8e710a588cc6f5834c4d84f7e12458efae593d3dfe527ca9e757c89239ecb8\""
May 13 00:33:22.417071 containerd[1556]: time="2025-05-13T00:33:22.417042602Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\""
May 13 00:33:22.418399 containerd[1556]: time="2025-05-13T00:33:22.418351642Z" level=info msg="CreateContainer within sandbox \"619de8f5eaacb21d068209b6bcb34be94dcf93586d926a617d46794a40f36ae8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
May 13 00:33:22.438610 containerd[1556]: time="2025-05-13T00:33:22.438419002Z" level=info msg="CreateContainer within sandbox \"619de8f5eaacb21d068209b6bcb34be94dcf93586d926a617d46794a40f36ae8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0a50f8d77b494ee39a6a82e08f08fe9f0d5fff782227bf8ccf4a24a38d2c7c75\""
May 13 00:33:22.439725 containerd[1556]: time="2025-05-13T00:33:22.439180242Z" level=info msg="StartContainer for \"0a50f8d77b494ee39a6a82e08f08fe9f0d5fff782227bf8ccf4a24a38d2c7c75\""
May 13 00:33:22.488115 containerd[1556]: time="2025-05-13T00:33:22.487205242Z" level=info msg="StartContainer for \"0a50f8d77b494ee39a6a82e08f08fe9f0d5fff782227bf8ccf4a24a38d2c7c75\" returns successfully"
May 13 00:33:22.528623 kubelet[1900]: E0513 00:33:22.527689 1900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:33:22.533579 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a50f8d77b494ee39a6a82e08f08fe9f0d5fff782227bf8ccf4a24a38d2c7c75-rootfs.mount: Deactivated successfully.
May 13 00:33:22.561853 containerd[1556]: time="2025-05-13T00:33:22.561790482Z" level=info msg="shim disconnected" id=0a50f8d77b494ee39a6a82e08f08fe9f0d5fff782227bf8ccf4a24a38d2c7c75 namespace=k8s.io
May 13 00:33:22.561853 containerd[1556]: time="2025-05-13T00:33:22.561849442Z" level=warning msg="cleaning up after shim disconnected" id=0a50f8d77b494ee39a6a82e08f08fe9f0d5fff782227bf8ccf4a24a38d2c7c75 namespace=k8s.io
May 13 00:33:22.561853 containerd[1556]: time="2025-05-13T00:33:22.561857642Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 00:33:23.364675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3339788077.mount: Deactivated successfully.
May 13 00:33:23.365055 kubelet[1900]: E0513 00:33:23.364738 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:33:23.518055 kubelet[1900]: E0513 00:33:23.517666 1900 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krtjb" podUID="3c0a2164-4e36-4a38-9d6e-97e893941f35"
May 13 00:33:23.529901 kubelet[1900]: E0513 00:33:23.529872 1900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:33:23.586474 containerd[1556]: time="2025-05-13T00:33:23.585742082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:33:23.586474 containerd[1556]: time="2025-05-13T00:33:23.586433962Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=25775707"
May 13 00:33:23.587116 containerd[1556]: time="2025-05-13T00:33:23.587085482Z" level=info msg="ImageCreate event name:\"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:33:23.590722 containerd[1556]: time="2025-05-13T00:33:23.589645762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:33:23.590722 containerd[1556]: time="2025-05-13T00:33:23.590319762Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"25774724\" in 1.17302448s"
May 13 00:33:23.590722 containerd[1556]: time="2025-05-13T00:33:23.590346242Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\""
May 13 00:33:23.592313 containerd[1556]: time="2025-05-13T00:33:23.592282162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\""
May 13 00:33:23.593114 containerd[1556]: time="2025-05-13T00:33:23.593071722Z" level=info msg="CreateContainer within sandbox \"12e50ce96e02d4924b4186d27be6ece700b9ff161425e8e53ed1413f3950ea48\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 13 00:33:23.608146 containerd[1556]: time="2025-05-13T00:33:23.608110602Z" level=info msg="CreateContainer within sandbox \"12e50ce96e02d4924b4186d27be6ece700b9ff161425e8e53ed1413f3950ea48\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6e6abdaa5af6d1e74e88b2b1002c68ddaa5173f996c26b104a76c322a7831d5b\""
May 13 00:33:23.608514 containerd[1556]: time="2025-05-13T00:33:23.608463802Z" level=info msg="StartContainer for \"6e6abdaa5af6d1e74e88b2b1002c68ddaa5173f996c26b104a76c322a7831d5b\""
May 13 00:33:23.657434 containerd[1556]: time="2025-05-13T00:33:23.657393122Z" level=info msg="StartContainer for \"6e6abdaa5af6d1e74e88b2b1002c68ddaa5173f996c26b104a76c322a7831d5b\" returns successfully"
May 13 00:33:24.365576 kubelet[1900]: E0513 00:33:24.365523 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:33:24.534659 kubelet[1900]: E0513 00:33:24.534248 1900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:33:24.544070 kubelet[1900]: I0513 00:33:24.544003 1900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-q2dk9" podStartSLOduration=4.429176642 podStartE2EDuration="6.543986562s" podCreationTimestamp="2025-05-13 00:33:18 +0000 UTC" firstStartedPulling="2025-05-13 00:33:21.476695322 +0000 UTC m=+4.135878281" lastFinishedPulling="2025-05-13 00:33:23.591505242 +0000 UTC m=+6.250688201" observedRunningTime="2025-05-13 00:33:24.543813962 +0000 UTC m=+7.202996921" watchObservedRunningTime="2025-05-13 00:33:24.543986562 +0000 UTC m=+7.203169521"
May 13 00:33:25.366421 kubelet[1900]: E0513 00:33:25.366381 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:33:25.518415 kubelet[1900]: E0513 00:33:25.518367 1900 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-krtjb" podUID="3c0a2164-4e36-4a38-9d6e-97e893941f35"
May 13 00:33:25.536460 kubelet[1900]: E0513 00:33:25.536414 1900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:33:25.613420 containerd[1556]: time="2025-05-13T00:33:25.612797762Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:33:25.613800 containerd[1556]: time="2025-05-13T00:33:25.613626282Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=91256270"
May 13 00:33:25.617417 containerd[1556]: time="2025-05-13T00:33:25.617213962Z" level=info msg="ImageCreate event name:\"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:33:25.620383 containerd[1556]: time="2025-05-13T00:33:25.620181482Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:33:25.620873 containerd[1556]: time="2025-05-13T00:33:25.620851202Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"92625452\" in 2.02786416s"
May 13 00:33:25.620919 containerd[1556]: time="2025-05-13T00:33:25.620879362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:add6372545fb406bb017769f222d84c50549ce13e3b19f1fbaee3d8a4aaef627\""
May 13 00:33:25.623369 containerd[1556]: time="2025-05-13T00:33:25.623199242Z" level=info msg="CreateContainer within sandbox \"619de8f5eaacb21d068209b6bcb34be94dcf93586d926a617d46794a40f36ae8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
May 13 00:33:25.632674 containerd[1556]: time="2025-05-13T00:33:25.632639122Z" level=info msg="CreateContainer within sandbox \"619de8f5eaacb21d068209b6bcb34be94dcf93586d926a617d46794a40f36ae8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2856960ec97ac231301582d12f1bc9f138c53fd6cb9e7c864eb151ce65337b03\""
May 13 00:33:25.634487 containerd[1556]: time="2025-05-13T00:33:25.633199162Z" level=info msg="StartContainer for \"2856960ec97ac231301582d12f1bc9f138c53fd6cb9e7c864eb151ce65337b03\""
May 13 00:33:25.652574 systemd[1]: run-containerd-runc-k8s.io-2856960ec97ac231301582d12f1bc9f138c53fd6cb9e7c864eb151ce65337b03-runc.L1LGH1.mount: Deactivated successfully.
May 13 00:33:25.692617 containerd[1556]: time="2025-05-13T00:33:25.692178282Z" level=info msg="StartContainer for \"2856960ec97ac231301582d12f1bc9f138c53fd6cb9e7c864eb151ce65337b03\" returns successfully"
May 13 00:33:26.261960 containerd[1556]: time="2025-05-13T00:33:26.261808042Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 13 00:33:26.277917 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2856960ec97ac231301582d12f1bc9f138c53fd6cb9e7c864eb151ce65337b03-rootfs.mount: Deactivated successfully.
May 13 00:33:26.352170 kubelet[1900]: I0513 00:33:26.351973 1900 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
May 13 00:33:26.367387 kubelet[1900]: E0513 00:33:26.367355 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:33:26.392028 containerd[1556]: time="2025-05-13T00:33:26.391815802Z" level=info msg="shim disconnected" id=2856960ec97ac231301582d12f1bc9f138c53fd6cb9e7c864eb151ce65337b03 namespace=k8s.io
May 13 00:33:26.392028 containerd[1556]: time="2025-05-13T00:33:26.391869202Z" level=warning msg="cleaning up after shim disconnected" id=2856960ec97ac231301582d12f1bc9f138c53fd6cb9e7c864eb151ce65337b03 namespace=k8s.io
May 13 00:33:26.392028 containerd[1556]: time="2025-05-13T00:33:26.391877322Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 00:33:26.539917 kubelet[1900]: E0513 00:33:26.539780 1900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:33:26.541633 containerd[1556]: time="2025-05-13T00:33:26.541374082Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\""
May 13 00:33:27.368267 kubelet[1900]: E0513 00:33:27.368206 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:33:27.521296 containerd[1556]: time="2025-05-13T00:33:27.521247962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-krtjb,Uid:3c0a2164-4e36-4a38-9d6e-97e893941f35,Namespace:calico-system,Attempt:0,}"
May 13 00:33:27.670721 containerd[1556]: time="2025-05-13T00:33:27.670200922Z" level=error msg="Failed to destroy network for sandbox \"14ee8288a5f4deec7659500b5877565034c985a15d413b8597385b2056c15cf1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 00:33:27.670721 containerd[1556]: time="2025-05-13T00:33:27.670585562Z" level=error msg="encountered an error cleaning up failed sandbox \"14ee8288a5f4deec7659500b5877565034c985a15d413b8597385b2056c15cf1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 00:33:27.670721 containerd[1556]: time="2025-05-13T00:33:27.670641082Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-krtjb,Uid:3c0a2164-4e36-4a38-9d6e-97e893941f35,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"14ee8288a5f4deec7659500b5877565034c985a15d413b8597385b2056c15cf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 00:33:27.671617 kubelet[1900]: E0513 00:33:27.671505 1900 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14ee8288a5f4deec7659500b5877565034c985a15d413b8597385b2056c15cf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 00:33:27.671617 kubelet[1900]: E0513 00:33:27.671584 1900 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14ee8288a5f4deec7659500b5877565034c985a15d413b8597385b2056c15cf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-krtjb"
May 13 00:33:27.671617 kubelet[1900]: E0513 00:33:27.671615 1900 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14ee8288a5f4deec7659500b5877565034c985a15d413b8597385b2056c15cf1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-krtjb"
May 13 00:33:27.671927 kubelet[1900]: E0513 00:33:27.671653 1900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-krtjb_calico-system(3c0a2164-4e36-4a38-9d6e-97e893941f35)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-krtjb_calico-system(3c0a2164-4e36-4a38-9d6e-97e893941f35)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"14ee8288a5f4deec7659500b5877565034c985a15d413b8597385b2056c15cf1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-krtjb" podUID="3c0a2164-4e36-4a38-9d6e-97e893941f35"
May 13 00:33:27.672658 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-14ee8288a5f4deec7659500b5877565034c985a15d413b8597385b2056c15cf1-shm.mount: Deactivated successfully.
May 13 00:33:28.369311 kubelet[1900]: E0513 00:33:28.369165 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:33:28.548624 kubelet[1900]: I0513 00:33:28.547903 1900 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14ee8288a5f4deec7659500b5877565034c985a15d413b8597385b2056c15cf1"
May 13 00:33:28.548797 containerd[1556]: time="2025-05-13T00:33:28.548465642Z" level=info msg="StopPodSandbox for \"14ee8288a5f4deec7659500b5877565034c985a15d413b8597385b2056c15cf1\""
May 13 00:33:28.548797 containerd[1556]: time="2025-05-13T00:33:28.548643682Z" level=info msg="Ensure that sandbox 14ee8288a5f4deec7659500b5877565034c985a15d413b8597385b2056c15cf1 in task-service has been cleanup successfully"
May 13 00:33:28.579328 containerd[1556]: time="2025-05-13T00:33:28.579276962Z" level=error msg="StopPodSandbox for \"14ee8288a5f4deec7659500b5877565034c985a15d413b8597385b2056c15cf1\" failed" error="failed to destroy network for sandbox \"14ee8288a5f4deec7659500b5877565034c985a15d413b8597385b2056c15cf1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 13 00:33:28.579918 kubelet[1900]: E0513 00:33:28.579690 1900 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"14ee8288a5f4deec7659500b5877565034c985a15d413b8597385b2056c15cf1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="14ee8288a5f4deec7659500b5877565034c985a15d413b8597385b2056c15cf1"
May 13 00:33:28.579918 kubelet[1900]: E0513 00:33:28.579748 1900 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"14ee8288a5f4deec7659500b5877565034c985a15d413b8597385b2056c15cf1"}
May 13 00:33:28.579918 kubelet[1900]: E0513 00:33:28.579806 1900 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3c0a2164-4e36-4a38-9d6e-97e893941f35\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"14ee8288a5f4deec7659500b5877565034c985a15d413b8597385b2056c15cf1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
May 13 00:33:28.579918 kubelet[1900]: E0513 00:33:28.579828 1900 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3c0a2164-4e36-4a38-9d6e-97e893941f35\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"14ee8288a5f4deec7659500b5877565034c985a15d413b8597385b2056c15cf1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-krtjb" podUID="3c0a2164-4e36-4a38-9d6e-97e893941f35"
May 13 00:33:29.370435 kubelet[1900]: E0513 00:33:29.370390 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:33:29.542565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2596490509.mount: Deactivated successfully.
May 13 00:33:29.797710 containerd[1556]: time="2025-05-13T00:33:29.797175042Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:33:29.797710 containerd[1556]: time="2025-05-13T00:33:29.797903002Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=138981893"
May 13 00:33:29.798715 containerd[1556]: time="2025-05-13T00:33:29.798663402Z" level=info msg="ImageCreate event name:\"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:33:29.800382 containerd[1556]: time="2025-05-13T00:33:29.800331522Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:33:29.801298 containerd[1556]: time="2025-05-13T00:33:29.801043562Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"138981755\" in 3.25962932s"
May 13 00:33:29.801298 containerd[1556]: time="2025-05-13T00:33:29.801078482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:cdcce3ec4624a24c28cdc07b0ee29ddf6703628edee7452a3f8a8b4816bfd057\""
May 13 00:33:29.809899 containerd[1556]: time="2025-05-13T00:33:29.809864322Z" level=info msg="CreateContainer within sandbox \"619de8f5eaacb21d068209b6bcb34be94dcf93586d926a617d46794a40f36ae8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
May 13 00:33:29.823512 containerd[1556]: time="2025-05-13T00:33:29.823460802Z" level=info msg="CreateContainer within sandbox \"619de8f5eaacb21d068209b6bcb34be94dcf93586d926a617d46794a40f36ae8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8d0b437b26b1361cfb63c799bf17cda31549e1c3540186827167ca60087a8685\""
May 13 00:33:29.824638 containerd[1556]: time="2025-05-13T00:33:29.824020402Z" level=info msg="StartContainer for \"8d0b437b26b1361cfb63c799bf17cda31549e1c3540186827167ca60087a8685\""
May 13 00:33:29.875690 containerd[1556]: time="2025-05-13T00:33:29.875635482Z" level=info msg="StartContainer for \"8d0b437b26b1361cfb63c799bf17cda31549e1c3540186827167ca60087a8685\" returns successfully"
May 13 00:33:29.884747 kubelet[1900]: I0513 00:33:29.884696 1900 topology_manager.go:215] "Topology Admit Handler" podUID="f5eded9d-1137-4e9d-9718-566b27c72aff" podNamespace="default" podName="nginx-deployment-85f456d6dd-ddgrp"
May 13 00:33:29.891646 kubelet[1900]: W0513 00:33:29.891218 1900 reflector.go:547] object-"default"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:10.0.0.113" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node '10.0.0.113' and this object
May 13 00:33:29.891646 kubelet[1900]: E0513 00:33:29.891260 1900 reflector.go:150] object-"default"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:10.0.0.113" cannot list resource "configmaps" in API group "" in the namespace "default": no relationship found between node '10.0.0.113' and this object
May 13 00:33:30.024391 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
May 13 00:33:30.024521 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
May 13 00:33:30.049901 kubelet[1900]: I0513 00:33:30.049797 1900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8b2x\" (UniqueName: \"kubernetes.io/projected/f5eded9d-1137-4e9d-9718-566b27c72aff-kube-api-access-s8b2x\") pod \"nginx-deployment-85f456d6dd-ddgrp\" (UID: \"f5eded9d-1137-4e9d-9718-566b27c72aff\") " pod="default/nginx-deployment-85f456d6dd-ddgrp" May 13 00:33:30.371446 kubelet[1900]: E0513 00:33:30.371399 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:33:30.555806 kubelet[1900]: E0513 00:33:30.555370 1900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:33:30.571459 kubelet[1900]: I0513 00:33:30.570732 1900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rpmz7" podStartSLOduration=4.243840602 podStartE2EDuration="12.570715762s" podCreationTimestamp="2025-05-13 00:33:18 +0000 UTC" firstStartedPulling="2025-05-13 00:33:21.475030882 +0000 UTC m=+4.134213841" lastFinishedPulling="2025-05-13 00:33:29.801906042 +0000 UTC m=+12.461089001" observedRunningTime="2025-05-13 00:33:30.570666042 +0000 UTC m=+13.229849001" watchObservedRunningTime="2025-05-13 00:33:30.570715762 +0000 UTC m=+13.229898721" May 13 00:33:31.162721 kubelet[1900]: E0513 00:33:31.162672 1900 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition May 13 00:33:31.162721 kubelet[1900]: E0513 00:33:31.162713 1900 projected.go:200] Error preparing data for projected volume kube-api-access-s8b2x for pod default/nginx-deployment-85f456d6dd-ddgrp: failed to sync configmap cache: timed out waiting for the condition May 13 00:33:31.162881 kubelet[1900]: E0513 00:33:31.162774 1900 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f5eded9d-1137-4e9d-9718-566b27c72aff-kube-api-access-s8b2x podName:f5eded9d-1137-4e9d-9718-566b27c72aff nodeName:}" failed. No retries permitted until 2025-05-13 00:33:31.662754562 +0000 UTC m=+14.321937521 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s8b2x" (UniqueName: "kubernetes.io/projected/f5eded9d-1137-4e9d-9718-566b27c72aff-kube-api-access-s8b2x") pod "nginx-deployment-85f456d6dd-ddgrp" (UID: "f5eded9d-1137-4e9d-9718-566b27c72aff") : failed to sync configmap cache: timed out waiting for the condition May 13 00:33:31.372376 kubelet[1900]: E0513 00:33:31.372336 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:33:31.520664 kernel: bpftool[2651]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set May 13 00:33:31.559900 kubelet[1900]: I0513 00:33:31.559004 1900 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:33:31.560185 kubelet[1900]: E0513 00:33:31.560166 1900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:33:31.666787 systemd-networkd[1237]: vxlan.calico: Link UP May 13 00:33:31.666793 systemd-networkd[1237]: vxlan.calico: Gained carrier May 13 00:33:31.689248 containerd[1556]: time="2025-05-13T00:33:31.688385562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-ddgrp,Uid:f5eded9d-1137-4e9d-9718-566b27c72aff,Namespace:default,Attempt:0,}" May 13 00:33:31.912912 systemd-networkd[1237]: cali032f654c249: Link UP May 13 00:33:31.913075 systemd-networkd[1237]: cali032f654c249: Gained carrier May 13 00:33:31.919261 containerd[1556]: 2025-05-13 00:33:31.745 [INFO][2694] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: 
&{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.113-k8s-nginx--deployment--85f456d6dd--ddgrp-eth0 nginx-deployment-85f456d6dd- default f5eded9d-1137-4e9d-9718-566b27c72aff 925 0 2025-05-13 00:33:29 +0000 UTC map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.113 nginx-deployment-85f456d6dd-ddgrp eth0 default [] [] [kns.default ksa.default.default] cali032f654c249 [] []}} ContainerID="bcfbeda157f3f9a938529ded8cf615ddb3738aee79b954d9c1bb01126b235d18" Namespace="default" Pod="nginx-deployment-85f456d6dd-ddgrp" WorkloadEndpoint="10.0.0.113-k8s-nginx--deployment--85f456d6dd--ddgrp-" May 13 00:33:31.919261 containerd[1556]: 2025-05-13 00:33:31.745 [INFO][2694] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bcfbeda157f3f9a938529ded8cf615ddb3738aee79b954d9c1bb01126b235d18" Namespace="default" Pod="nginx-deployment-85f456d6dd-ddgrp" WorkloadEndpoint="10.0.0.113-k8s-nginx--deployment--85f456d6dd--ddgrp-eth0" May 13 00:33:31.919261 containerd[1556]: 2025-05-13 00:33:31.836 [INFO][2706] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bcfbeda157f3f9a938529ded8cf615ddb3738aee79b954d9c1bb01126b235d18" HandleID="k8s-pod-network.bcfbeda157f3f9a938529ded8cf615ddb3738aee79b954d9c1bb01126b235d18" Workload="10.0.0.113-k8s-nginx--deployment--85f456d6dd--ddgrp-eth0" May 13 00:33:31.919261 containerd[1556]: 2025-05-13 00:33:31.861 [INFO][2706] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bcfbeda157f3f9a938529ded8cf615ddb3738aee79b954d9c1bb01126b235d18" HandleID="k8s-pod-network.bcfbeda157f3f9a938529ded8cf615ddb3738aee79b954d9c1bb01126b235d18" Workload="10.0.0.113-k8s-nginx--deployment--85f456d6dd--ddgrp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000284350), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.113", 
"pod":"nginx-deployment-85f456d6dd-ddgrp", "timestamp":"2025-05-13 00:33:31.836911322 +0000 UTC"}, Hostname:"10.0.0.113", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:33:31.919261 containerd[1556]: 2025-05-13 00:33:31.861 [INFO][2706] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:33:31.919261 containerd[1556]: 2025-05-13 00:33:31.861 [INFO][2706] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:33:31.919261 containerd[1556]: 2025-05-13 00:33:31.861 [INFO][2706] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.113' May 13 00:33:31.919261 containerd[1556]: 2025-05-13 00:33:31.866 [INFO][2706] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bcfbeda157f3f9a938529ded8cf615ddb3738aee79b954d9c1bb01126b235d18" host="10.0.0.113" May 13 00:33:31.919261 containerd[1556]: 2025-05-13 00:33:31.873 [INFO][2706] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.113" May 13 00:33:31.919261 containerd[1556]: 2025-05-13 00:33:31.883 [INFO][2706] ipam/ipam.go 489: Trying affinity for 192.168.6.192/26 host="10.0.0.113" May 13 00:33:31.919261 containerd[1556]: 2025-05-13 00:33:31.888 [INFO][2706] ipam/ipam.go 155: Attempting to load block cidr=192.168.6.192/26 host="10.0.0.113" May 13 00:33:31.919261 containerd[1556]: 2025-05-13 00:33:31.891 [INFO][2706] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.6.192/26 host="10.0.0.113" May 13 00:33:31.919261 containerd[1556]: 2025-05-13 00:33:31.891 [INFO][2706] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.6.192/26 handle="k8s-pod-network.bcfbeda157f3f9a938529ded8cf615ddb3738aee79b954d9c1bb01126b235d18" host="10.0.0.113" May 13 00:33:31.919261 containerd[1556]: 2025-05-13 00:33:31.893 [INFO][2706] 
ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bcfbeda157f3f9a938529ded8cf615ddb3738aee79b954d9c1bb01126b235d18 May 13 00:33:31.919261 containerd[1556]: 2025-05-13 00:33:31.897 [INFO][2706] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.6.192/26 handle="k8s-pod-network.bcfbeda157f3f9a938529ded8cf615ddb3738aee79b954d9c1bb01126b235d18" host="10.0.0.113" May 13 00:33:31.919261 containerd[1556]: 2025-05-13 00:33:31.907 [INFO][2706] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.6.193/26] block=192.168.6.192/26 handle="k8s-pod-network.bcfbeda157f3f9a938529ded8cf615ddb3738aee79b954d9c1bb01126b235d18" host="10.0.0.113" May 13 00:33:31.919261 containerd[1556]: 2025-05-13 00:33:31.907 [INFO][2706] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.6.193/26] handle="k8s-pod-network.bcfbeda157f3f9a938529ded8cf615ddb3738aee79b954d9c1bb01126b235d18" host="10.0.0.113" May 13 00:33:31.919261 containerd[1556]: 2025-05-13 00:33:31.907 [INFO][2706] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 00:33:31.919261 containerd[1556]: 2025-05-13 00:33:31.907 [INFO][2706] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.6.193/26] IPv6=[] ContainerID="bcfbeda157f3f9a938529ded8cf615ddb3738aee79b954d9c1bb01126b235d18" HandleID="k8s-pod-network.bcfbeda157f3f9a938529ded8cf615ddb3738aee79b954d9c1bb01126b235d18" Workload="10.0.0.113-k8s-nginx--deployment--85f456d6dd--ddgrp-eth0" May 13 00:33:31.919845 containerd[1556]: 2025-05-13 00:33:31.909 [INFO][2694] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bcfbeda157f3f9a938529ded8cf615ddb3738aee79b954d9c1bb01126b235d18" Namespace="default" Pod="nginx-deployment-85f456d6dd-ddgrp" WorkloadEndpoint="10.0.0.113-k8s-nginx--deployment--85f456d6dd--ddgrp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113-k8s-nginx--deployment--85f456d6dd--ddgrp-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"f5eded9d-1137-4e9d-9718-566b27c72aff", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 33, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.113", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-ddgrp", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.6.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali032f654c249", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:33:31.919845 containerd[1556]: 2025-05-13 00:33:31.909 [INFO][2694] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.6.193/32] ContainerID="bcfbeda157f3f9a938529ded8cf615ddb3738aee79b954d9c1bb01126b235d18" Namespace="default" Pod="nginx-deployment-85f456d6dd-ddgrp" WorkloadEndpoint="10.0.0.113-k8s-nginx--deployment--85f456d6dd--ddgrp-eth0" May 13 00:33:31.919845 containerd[1556]: 2025-05-13 00:33:31.910 [INFO][2694] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali032f654c249 ContainerID="bcfbeda157f3f9a938529ded8cf615ddb3738aee79b954d9c1bb01126b235d18" Namespace="default" Pod="nginx-deployment-85f456d6dd-ddgrp" WorkloadEndpoint="10.0.0.113-k8s-nginx--deployment--85f456d6dd--ddgrp-eth0" May 13 00:33:31.919845 containerd[1556]: 2025-05-13 00:33:31.911 [INFO][2694] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bcfbeda157f3f9a938529ded8cf615ddb3738aee79b954d9c1bb01126b235d18" Namespace="default" Pod="nginx-deployment-85f456d6dd-ddgrp" WorkloadEndpoint="10.0.0.113-k8s-nginx--deployment--85f456d6dd--ddgrp-eth0" May 13 00:33:31.919845 containerd[1556]: 2025-05-13 00:33:31.912 [INFO][2694] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bcfbeda157f3f9a938529ded8cf615ddb3738aee79b954d9c1bb01126b235d18" Namespace="default" Pod="nginx-deployment-85f456d6dd-ddgrp" WorkloadEndpoint="10.0.0.113-k8s-nginx--deployment--85f456d6dd--ddgrp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113-k8s-nginx--deployment--85f456d6dd--ddgrp-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"f5eded9d-1137-4e9d-9718-566b27c72aff", ResourceVersion:"925", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 33, 29, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.113", ContainerID:"bcfbeda157f3f9a938529ded8cf615ddb3738aee79b954d9c1bb01126b235d18", Pod:"nginx-deployment-85f456d6dd-ddgrp", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.6.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali032f654c249", MAC:"32:5a:a6:f7:cd:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:33:31.919845 containerd[1556]: 2025-05-13 00:33:31.917 [INFO][2694] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bcfbeda157f3f9a938529ded8cf615ddb3738aee79b954d9c1bb01126b235d18" Namespace="default" Pod="nginx-deployment-85f456d6dd-ddgrp" WorkloadEndpoint="10.0.0.113-k8s-nginx--deployment--85f456d6dd--ddgrp-eth0" May 13 00:33:31.935102 containerd[1556]: time="2025-05-13T00:33:31.934792242Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:33:31.935102 containerd[1556]: time="2025-05-13T00:33:31.934861842Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:33:31.935102 containerd[1556]: time="2025-05-13T00:33:31.934872962Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:33:31.935102 containerd[1556]: time="2025-05-13T00:33:31.934952442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:33:31.956974 systemd-resolved[1445]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:33:31.973588 containerd[1556]: time="2025-05-13T00:33:31.973551562Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-ddgrp,Uid:f5eded9d-1137-4e9d-9718-566b27c72aff,Namespace:default,Attempt:0,} returns sandbox id \"bcfbeda157f3f9a938529ded8cf615ddb3738aee79b954d9c1bb01126b235d18\"" May 13 00:33:31.975080 containerd[1556]: time="2025-05-13T00:33:31.975056362Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 13 00:33:32.373048 kubelet[1900]: E0513 00:33:32.373012 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:33:33.070866 systemd-networkd[1237]: cali032f654c249: Gained IPv6LL May 13 00:33:33.373828 kubelet[1900]: E0513 00:33:33.373796 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:33:33.581849 systemd-networkd[1237]: vxlan.calico: Gained IPv6LL May 13 00:33:33.746386 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1390348837.mount: Deactivated successfully. 
May 13 00:33:34.374882 kubelet[1900]: E0513 00:33:34.374834 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:33:34.548483 containerd[1556]: time="2025-05-13T00:33:34.548436482Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:33:34.548946 containerd[1556]: time="2025-05-13T00:33:34.548910922Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=69948859" May 13 00:33:34.549842 containerd[1556]: time="2025-05-13T00:33:34.549810322Z" level=info msg="ImageCreate event name:\"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:33:34.554782 containerd[1556]: time="2025-05-13T00:33:34.554755442Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\", size \"69948737\" in 2.57966272s" May 13 00:33:34.555063 containerd[1556]: time="2025-05-13T00:33:34.554876362Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\"" May 13 00:33:34.556525 containerd[1556]: time="2025-05-13T00:33:34.556478842Z" level=info msg="CreateContainer within sandbox \"bcfbeda157f3f9a938529ded8cf615ddb3738aee79b954d9c1bb01126b235d18\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" May 13 00:33:34.558646 containerd[1556]: time="2025-05-13T00:33:34.558593042Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" May 13 00:33:34.565789 containerd[1556]: time="2025-05-13T00:33:34.565745202Z" level=info msg="CreateContainer within sandbox \"bcfbeda157f3f9a938529ded8cf615ddb3738aee79b954d9c1bb01126b235d18\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"15a0fbd81d4b87d2ed29432c6ee4ca21b7267843ae134e915ee58a9cd54cdb7b\"" May 13 00:33:34.566300 containerd[1556]: time="2025-05-13T00:33:34.566258402Z" level=info msg="StartContainer for \"15a0fbd81d4b87d2ed29432c6ee4ca21b7267843ae134e915ee58a9cd54cdb7b\"" May 13 00:33:34.688997 containerd[1556]: time="2025-05-13T00:33:34.688857362Z" level=info msg="StartContainer for \"15a0fbd81d4b87d2ed29432c6ee4ca21b7267843ae134e915ee58a9cd54cdb7b\" returns successfully" May 13 00:33:35.375497 kubelet[1900]: E0513 00:33:35.375453 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:33:35.584727 kubelet[1900]: I0513 00:33:35.584598 1900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-ddgrp" podStartSLOduration=4.003879082 podStartE2EDuration="6.584580202s" podCreationTimestamp="2025-05-13 00:33:29 +0000 UTC" firstStartedPulling="2025-05-13 00:33:31.974723202 +0000 UTC m=+14.633906161" lastFinishedPulling="2025-05-13 00:33:34.555424322 +0000 UTC m=+17.214607281" observedRunningTime="2025-05-13 00:33:35.584233602 +0000 UTC m=+18.243416521" watchObservedRunningTime="2025-05-13 00:33:35.584580202 +0000 UTC m=+18.243763121" May 13 00:33:36.140650 kubelet[1900]: I0513 00:33:36.140044 1900 topology_manager.go:215] "Topology Admit Handler" podUID="ab339818-3e06-4ba2-8e7f-a28161f02a27" podNamespace="calico-apiserver" podName="calico-apiserver-5db7694f6b-ps2x8" May 13 00:33:36.190413 kubelet[1900]: I0513 00:33:36.190365 1900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/ab339818-3e06-4ba2-8e7f-a28161f02a27-calico-apiserver-certs\") pod \"calico-apiserver-5db7694f6b-ps2x8\" (UID: \"ab339818-3e06-4ba2-8e7f-a28161f02a27\") " pod="calico-apiserver/calico-apiserver-5db7694f6b-ps2x8" May 13 00:33:36.190413 kubelet[1900]: I0513 00:33:36.190414 1900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkmmg\" (UniqueName: \"kubernetes.io/projected/ab339818-3e06-4ba2-8e7f-a28161f02a27-kube-api-access-tkmmg\") pod \"calico-apiserver-5db7694f6b-ps2x8\" (UID: \"ab339818-3e06-4ba2-8e7f-a28161f02a27\") " pod="calico-apiserver/calico-apiserver-5db7694f6b-ps2x8" May 13 00:33:36.376169 kubelet[1900]: E0513 00:33:36.376118 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:33:36.443570 containerd[1556]: time="2025-05-13T00:33:36.443477042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5db7694f6b-ps2x8,Uid:ab339818-3e06-4ba2-8e7f-a28161f02a27,Namespace:calico-apiserver,Attempt:0,}" May 13 00:33:36.553294 systemd-networkd[1237]: calid5f3510ad94: Link UP May 13 00:33:36.554036 systemd-networkd[1237]: calid5f3510ad94: Gained carrier May 13 00:33:36.563826 containerd[1556]: 2025-05-13 00:33:36.488 [INFO][2902] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.113-k8s-calico--apiserver--5db7694f6b--ps2x8-eth0 calico-apiserver-5db7694f6b- calico-apiserver ab339818-3e06-4ba2-8e7f-a28161f02a27 1086 0 2025-05-13 00:33:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5db7694f6b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s 10.0.0.113 calico-apiserver-5db7694f6b-ps2x8 eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] calid5f3510ad94 [] []}} ContainerID="84c9b5920e1961ec2492a260c2dddf2a595bf8434c1f1ffc2e6498fdd3cae980" Namespace="calico-apiserver" Pod="calico-apiserver-5db7694f6b-ps2x8" WorkloadEndpoint="10.0.0.113-k8s-calico--apiserver--5db7694f6b--ps2x8-" May 13 00:33:36.563826 containerd[1556]: 2025-05-13 00:33:36.488 [INFO][2902] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="84c9b5920e1961ec2492a260c2dddf2a595bf8434c1f1ffc2e6498fdd3cae980" Namespace="calico-apiserver" Pod="calico-apiserver-5db7694f6b-ps2x8" WorkloadEndpoint="10.0.0.113-k8s-calico--apiserver--5db7694f6b--ps2x8-eth0" May 13 00:33:36.563826 containerd[1556]: 2025-05-13 00:33:36.512 [INFO][2916] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="84c9b5920e1961ec2492a260c2dddf2a595bf8434c1f1ffc2e6498fdd3cae980" HandleID="k8s-pod-network.84c9b5920e1961ec2492a260c2dddf2a595bf8434c1f1ffc2e6498fdd3cae980" Workload="10.0.0.113-k8s-calico--apiserver--5db7694f6b--ps2x8-eth0" May 13 00:33:36.563826 containerd[1556]: 2025-05-13 00:33:36.523 [INFO][2916] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="84c9b5920e1961ec2492a260c2dddf2a595bf8434c1f1ffc2e6498fdd3cae980" HandleID="k8s-pod-network.84c9b5920e1961ec2492a260c2dddf2a595bf8434c1f1ffc2e6498fdd3cae980" Workload="10.0.0.113-k8s-calico--apiserver--5db7694f6b--ps2x8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005a1aa0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"10.0.0.113", "pod":"calico-apiserver-5db7694f6b-ps2x8", "timestamp":"2025-05-13 00:33:36.512716802 +0000 UTC"}, Hostname:"10.0.0.113", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:33:36.563826 containerd[1556]: 2025-05-13 00:33:36.523 [INFO][2916] ipam/ipam_plugin.go 353: About to acquire host-wide 
IPAM lock. May 13 00:33:36.563826 containerd[1556]: 2025-05-13 00:33:36.523 [INFO][2916] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:33:36.563826 containerd[1556]: 2025-05-13 00:33:36.524 [INFO][2916] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.113' May 13 00:33:36.563826 containerd[1556]: 2025-05-13 00:33:36.525 [INFO][2916] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.84c9b5920e1961ec2492a260c2dddf2a595bf8434c1f1ffc2e6498fdd3cae980" host="10.0.0.113" May 13 00:33:36.563826 containerd[1556]: 2025-05-13 00:33:36.529 [INFO][2916] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.113" May 13 00:33:36.563826 containerd[1556]: 2025-05-13 00:33:36.533 [INFO][2916] ipam/ipam.go 489: Trying affinity for 192.168.6.192/26 host="10.0.0.113" May 13 00:33:36.563826 containerd[1556]: 2025-05-13 00:33:36.535 [INFO][2916] ipam/ipam.go 155: Attempting to load block cidr=192.168.6.192/26 host="10.0.0.113" May 13 00:33:36.563826 containerd[1556]: 2025-05-13 00:33:36.537 [INFO][2916] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.6.192/26 host="10.0.0.113" May 13 00:33:36.563826 containerd[1556]: 2025-05-13 00:33:36.537 [INFO][2916] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.6.192/26 handle="k8s-pod-network.84c9b5920e1961ec2492a260c2dddf2a595bf8434c1f1ffc2e6498fdd3cae980" host="10.0.0.113" May 13 00:33:36.563826 containerd[1556]: 2025-05-13 00:33:36.538 [INFO][2916] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.84c9b5920e1961ec2492a260c2dddf2a595bf8434c1f1ffc2e6498fdd3cae980 May 13 00:33:36.563826 containerd[1556]: 2025-05-13 00:33:36.542 [INFO][2916] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.6.192/26 handle="k8s-pod-network.84c9b5920e1961ec2492a260c2dddf2a595bf8434c1f1ffc2e6498fdd3cae980" host="10.0.0.113" May 13 00:33:36.563826 containerd[1556]: 2025-05-13 00:33:36.546 
[INFO][2916] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.6.194/26] block=192.168.6.192/26 handle="k8s-pod-network.84c9b5920e1961ec2492a260c2dddf2a595bf8434c1f1ffc2e6498fdd3cae980" host="10.0.0.113" May 13 00:33:36.563826 containerd[1556]: 2025-05-13 00:33:36.546 [INFO][2916] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.6.194/26] handle="k8s-pod-network.84c9b5920e1961ec2492a260c2dddf2a595bf8434c1f1ffc2e6498fdd3cae980" host="10.0.0.113" May 13 00:33:36.563826 containerd[1556]: 2025-05-13 00:33:36.546 [INFO][2916] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:33:36.563826 containerd[1556]: 2025-05-13 00:33:36.546 [INFO][2916] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.6.194/26] IPv6=[] ContainerID="84c9b5920e1961ec2492a260c2dddf2a595bf8434c1f1ffc2e6498fdd3cae980" HandleID="k8s-pod-network.84c9b5920e1961ec2492a260c2dddf2a595bf8434c1f1ffc2e6498fdd3cae980" Workload="10.0.0.113-k8s-calico--apiserver--5db7694f6b--ps2x8-eth0" May 13 00:33:36.564341 containerd[1556]: 2025-05-13 00:33:36.548 [INFO][2902] cni-plugin/k8s.go 386: Populated endpoint ContainerID="84c9b5920e1961ec2492a260c2dddf2a595bf8434c1f1ffc2e6498fdd3cae980" Namespace="calico-apiserver" Pod="calico-apiserver-5db7694f6b-ps2x8" WorkloadEndpoint="10.0.0.113-k8s-calico--apiserver--5db7694f6b--ps2x8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113-k8s-calico--apiserver--5db7694f6b--ps2x8-eth0", GenerateName:"calico-apiserver-5db7694f6b-", Namespace:"calico-apiserver", SelfLink:"", UID:"ab339818-3e06-4ba2-8e7f-a28161f02a27", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 33, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"5db7694f6b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.113", ContainerID:"", Pod:"calico-apiserver-5db7694f6b-ps2x8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.6.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid5f3510ad94", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:33:36.564341 containerd[1556]: 2025-05-13 00:33:36.550 [INFO][2902] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.6.194/32] ContainerID="84c9b5920e1961ec2492a260c2dddf2a595bf8434c1f1ffc2e6498fdd3cae980" Namespace="calico-apiserver" Pod="calico-apiserver-5db7694f6b-ps2x8" WorkloadEndpoint="10.0.0.113-k8s-calico--apiserver--5db7694f6b--ps2x8-eth0" May 13 00:33:36.564341 containerd[1556]: 2025-05-13 00:33:36.550 [INFO][2902] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid5f3510ad94 ContainerID="84c9b5920e1961ec2492a260c2dddf2a595bf8434c1f1ffc2e6498fdd3cae980" Namespace="calico-apiserver" Pod="calico-apiserver-5db7694f6b-ps2x8" WorkloadEndpoint="10.0.0.113-k8s-calico--apiserver--5db7694f6b--ps2x8-eth0" May 13 00:33:36.564341 containerd[1556]: 2025-05-13 00:33:36.554 [INFO][2902] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="84c9b5920e1961ec2492a260c2dddf2a595bf8434c1f1ffc2e6498fdd3cae980" Namespace="calico-apiserver" Pod="calico-apiserver-5db7694f6b-ps2x8" WorkloadEndpoint="10.0.0.113-k8s-calico--apiserver--5db7694f6b--ps2x8-eth0" May 13 00:33:36.564341 containerd[1556]: 2025-05-13 
00:33:36.554 [INFO][2902] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="84c9b5920e1961ec2492a260c2dddf2a595bf8434c1f1ffc2e6498fdd3cae980" Namespace="calico-apiserver" Pod="calico-apiserver-5db7694f6b-ps2x8" WorkloadEndpoint="10.0.0.113-k8s-calico--apiserver--5db7694f6b--ps2x8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113-k8s-calico--apiserver--5db7694f6b--ps2x8-eth0", GenerateName:"calico-apiserver-5db7694f6b-", Namespace:"calico-apiserver", SelfLink:"", UID:"ab339818-3e06-4ba2-8e7f-a28161f02a27", ResourceVersion:"1086", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 33, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5db7694f6b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.113", ContainerID:"84c9b5920e1961ec2492a260c2dddf2a595bf8434c1f1ffc2e6498fdd3cae980", Pod:"calico-apiserver-5db7694f6b-ps2x8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.6.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid5f3510ad94", MAC:"2e:b6:72:11:5a:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:33:36.564341 containerd[1556]: 2025-05-13 00:33:36.561 [INFO][2902] cni-plugin/k8s.go 500: Wrote updated endpoint to 
datastore ContainerID="84c9b5920e1961ec2492a260c2dddf2a595bf8434c1f1ffc2e6498fdd3cae980" Namespace="calico-apiserver" Pod="calico-apiserver-5db7694f6b-ps2x8" WorkloadEndpoint="10.0.0.113-k8s-calico--apiserver--5db7694f6b--ps2x8-eth0" May 13 00:33:36.585342 containerd[1556]: time="2025-05-13T00:33:36.585242842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:33:36.585825 containerd[1556]: time="2025-05-13T00:33:36.585329162Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:33:36.585825 containerd[1556]: time="2025-05-13T00:33:36.585344442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:33:36.585825 containerd[1556]: time="2025-05-13T00:33:36.585427642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:33:36.608228 systemd-resolved[1445]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:33:36.660914 containerd[1556]: time="2025-05-13T00:33:36.660697002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5db7694f6b-ps2x8,Uid:ab339818-3e06-4ba2-8e7f-a28161f02a27,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"84c9b5920e1961ec2492a260c2dddf2a595bf8434c1f1ffc2e6498fdd3cae980\"" May 13 00:33:36.662914 containerd[1556]: time="2025-05-13T00:33:36.662695722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\"" May 13 00:33:37.377248 kubelet[1900]: E0513 00:33:37.377193 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:33:37.945090 containerd[1556]: time="2025-05-13T00:33:37.945025122Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:33:37.945554 containerd[1556]: time="2025-05-13T00:33:37.945515802Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.3: active requests=0, bytes read=40247603" May 13 00:33:37.946495 containerd[1556]: time="2025-05-13T00:33:37.946459722Z" level=info msg="ImageCreate event name:\"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:33:37.948486 containerd[1556]: time="2025-05-13T00:33:37.948437002Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:33:37.949234 containerd[1556]: time="2025-05-13T00:33:37.949066562Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" with image 
id \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:bcb659f25f9aebaa389ed1dbb65edb39478ddf82c57d07d8da474e8cab38d77b\", size \"41616801\" in 1.28633996s" May 13 00:33:37.949234 containerd[1556]: time="2025-05-13T00:33:37.949097122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.3\" returns image reference \"sha256:eca64fb9fcc40e83ed2310ac1fab340ba460a939c54e10dc0b7428f02b9b6253\"" May 13 00:33:37.951416 containerd[1556]: time="2025-05-13T00:33:37.951379642Z" level=info msg="CreateContainer within sandbox \"84c9b5920e1961ec2492a260c2dddf2a595bf8434c1f1ffc2e6498fdd3cae980\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 13 00:33:38.058511 containerd[1556]: time="2025-05-13T00:33:38.058470562Z" level=info msg="CreateContainer within sandbox \"84c9b5920e1961ec2492a260c2dddf2a595bf8434c1f1ffc2e6498fdd3cae980\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8b267a3b28f718e4c0666bf30d8f7b5f00f6f2021e935e902f30f1359935a916\"" May 13 00:33:38.059367 containerd[1556]: time="2025-05-13T00:33:38.059331722Z" level=info msg="StartContainer for \"8b267a3b28f718e4c0666bf30d8f7b5f00f6f2021e935e902f30f1359935a916\"" May 13 00:33:38.118795 containerd[1556]: time="2025-05-13T00:33:38.118745522Z" level=info msg="StartContainer for \"8b267a3b28f718e4c0666bf30d8f7b5f00f6f2021e935e902f30f1359935a916\" returns successfully" May 13 00:33:38.126901 systemd-networkd[1237]: calid5f3510ad94: Gained IPv6LL May 13 00:33:38.361128 kubelet[1900]: E0513 00:33:38.361023 1900 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:33:38.377894 kubelet[1900]: E0513 00:33:38.377857 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:33:39.378763 
kubelet[1900]: E0513 00:33:39.378724 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:33:39.576955 kubelet[1900]: I0513 00:33:39.576913 1900 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:33:40.379705 kubelet[1900]: E0513 00:33:40.379661 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:33:40.519649 containerd[1556]: time="2025-05-13T00:33:40.519341602Z" level=info msg="StopPodSandbox for \"14ee8288a5f4deec7659500b5877565034c985a15d413b8597385b2056c15cf1\"" May 13 00:33:40.580428 kubelet[1900]: I0513 00:33:40.580350 1900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5db7694f6b-ps2x8" podStartSLOduration=3.292422642 podStartE2EDuration="4.580332962s" podCreationTimestamp="2025-05-13 00:33:36 +0000 UTC" firstStartedPulling="2025-05-13 00:33:36.662279122 +0000 UTC m=+19.321462081" lastFinishedPulling="2025-05-13 00:33:37.950189442 +0000 UTC m=+20.609372401" observedRunningTime="2025-05-13 00:33:38.584831882 +0000 UTC m=+21.244014801" watchObservedRunningTime="2025-05-13 00:33:40.580332962 +0000 UTC m=+23.239515921" May 13 00:33:40.615987 containerd[1556]: 2025-05-13 00:33:40.580 [INFO][3054] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="14ee8288a5f4deec7659500b5877565034c985a15d413b8597385b2056c15cf1" May 13 00:33:40.615987 containerd[1556]: 2025-05-13 00:33:40.580 [INFO][3054] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="14ee8288a5f4deec7659500b5877565034c985a15d413b8597385b2056c15cf1" iface="eth0" netns="/var/run/netns/cni-de355b25-e05c-71b0-b3c3-af2d050684d4" May 13 00:33:40.615987 containerd[1556]: 2025-05-13 00:33:40.581 [INFO][3054] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="14ee8288a5f4deec7659500b5877565034c985a15d413b8597385b2056c15cf1" iface="eth0" netns="/var/run/netns/cni-de355b25-e05c-71b0-b3c3-af2d050684d4" May 13 00:33:40.615987 containerd[1556]: 2025-05-13 00:33:40.581 [INFO][3054] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="14ee8288a5f4deec7659500b5877565034c985a15d413b8597385b2056c15cf1" iface="eth0" netns="/var/run/netns/cni-de355b25-e05c-71b0-b3c3-af2d050684d4" May 13 00:33:40.615987 containerd[1556]: 2025-05-13 00:33:40.581 [INFO][3054] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="14ee8288a5f4deec7659500b5877565034c985a15d413b8597385b2056c15cf1" May 13 00:33:40.615987 containerd[1556]: 2025-05-13 00:33:40.581 [INFO][3054] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="14ee8288a5f4deec7659500b5877565034c985a15d413b8597385b2056c15cf1" May 13 00:33:40.615987 containerd[1556]: 2025-05-13 00:33:40.600 [INFO][3064] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="14ee8288a5f4deec7659500b5877565034c985a15d413b8597385b2056c15cf1" HandleID="k8s-pod-network.14ee8288a5f4deec7659500b5877565034c985a15d413b8597385b2056c15cf1" Workload="10.0.0.113-k8s-csi--node--driver--krtjb-eth0" May 13 00:33:40.615987 containerd[1556]: 2025-05-13 00:33:40.601 [INFO][3064] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:33:40.615987 containerd[1556]: 2025-05-13 00:33:40.601 [INFO][3064] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:33:40.615987 containerd[1556]: 2025-05-13 00:33:40.611 [WARNING][3064] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="14ee8288a5f4deec7659500b5877565034c985a15d413b8597385b2056c15cf1" HandleID="k8s-pod-network.14ee8288a5f4deec7659500b5877565034c985a15d413b8597385b2056c15cf1" Workload="10.0.0.113-k8s-csi--node--driver--krtjb-eth0" May 13 00:33:40.615987 containerd[1556]: 2025-05-13 00:33:40.611 [INFO][3064] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="14ee8288a5f4deec7659500b5877565034c985a15d413b8597385b2056c15cf1" HandleID="k8s-pod-network.14ee8288a5f4deec7659500b5877565034c985a15d413b8597385b2056c15cf1" Workload="10.0.0.113-k8s-csi--node--driver--krtjb-eth0" May 13 00:33:40.615987 containerd[1556]: 2025-05-13 00:33:40.613 [INFO][3064] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:33:40.615987 containerd[1556]: 2025-05-13 00:33:40.614 [INFO][3054] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="14ee8288a5f4deec7659500b5877565034c985a15d413b8597385b2056c15cf1" May 13 00:33:40.616606 containerd[1556]: time="2025-05-13T00:33:40.616556602Z" level=info msg="TearDown network for sandbox \"14ee8288a5f4deec7659500b5877565034c985a15d413b8597385b2056c15cf1\" successfully" May 13 00:33:40.616606 containerd[1556]: time="2025-05-13T00:33:40.616591802Z" level=info msg="StopPodSandbox for \"14ee8288a5f4deec7659500b5877565034c985a15d413b8597385b2056c15cf1\" returns successfully" May 13 00:33:40.617250 containerd[1556]: time="2025-05-13T00:33:40.617222842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-krtjb,Uid:3c0a2164-4e36-4a38-9d6e-97e893941f35,Namespace:calico-system,Attempt:1,}" May 13 00:33:40.618136 systemd[1]: run-netns-cni\x2dde355b25\x2de05c\x2d71b0\x2db3c3\x2daf2d050684d4.mount: Deactivated successfully. 
May 13 00:33:40.725553 systemd-networkd[1237]: cali34017e6fec4: Link UP May 13 00:33:40.727169 systemd-networkd[1237]: cali34017e6fec4: Gained carrier May 13 00:33:40.740741 containerd[1556]: 2025-05-13 00:33:40.656 [INFO][3079] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.113-k8s-csi--node--driver--krtjb-eth0 csi-node-driver- calico-system 3c0a2164-4e36-4a38-9d6e-97e893941f35 1123 0 2025-05-13 00:33:18 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b7b4b9d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.113 csi-node-driver-krtjb eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali34017e6fec4 [] []}} ContainerID="3094329bbf342c091bc048f2cbbebcb9171e9962b9655b04fd0e863e10dd0d71" Namespace="calico-system" Pod="csi-node-driver-krtjb" WorkloadEndpoint="10.0.0.113-k8s-csi--node--driver--krtjb-" May 13 00:33:40.740741 containerd[1556]: 2025-05-13 00:33:40.656 [INFO][3079] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3094329bbf342c091bc048f2cbbebcb9171e9962b9655b04fd0e863e10dd0d71" Namespace="calico-system" Pod="csi-node-driver-krtjb" WorkloadEndpoint="10.0.0.113-k8s-csi--node--driver--krtjb-eth0" May 13 00:33:40.740741 containerd[1556]: 2025-05-13 00:33:40.682 [INFO][3089] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3094329bbf342c091bc048f2cbbebcb9171e9962b9655b04fd0e863e10dd0d71" HandleID="k8s-pod-network.3094329bbf342c091bc048f2cbbebcb9171e9962b9655b04fd0e863e10dd0d71" Workload="10.0.0.113-k8s-csi--node--driver--krtjb-eth0" May 13 00:33:40.740741 containerd[1556]: 2025-05-13 00:33:40.694 [INFO][3089] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="3094329bbf342c091bc048f2cbbebcb9171e9962b9655b04fd0e863e10dd0d71" HandleID="k8s-pod-network.3094329bbf342c091bc048f2cbbebcb9171e9962b9655b04fd0e863e10dd0d71" Workload="10.0.0.113-k8s-csi--node--driver--krtjb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000502820), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.113", "pod":"csi-node-driver-krtjb", "timestamp":"2025-05-13 00:33:40.682457002 +0000 UTC"}, Hostname:"10.0.0.113", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:33:40.740741 containerd[1556]: 2025-05-13 00:33:40.694 [INFO][3089] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:33:40.740741 containerd[1556]: 2025-05-13 00:33:40.694 [INFO][3089] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:33:40.740741 containerd[1556]: 2025-05-13 00:33:40.694 [INFO][3089] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.113' May 13 00:33:40.740741 containerd[1556]: 2025-05-13 00:33:40.696 [INFO][3089] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3094329bbf342c091bc048f2cbbebcb9171e9962b9655b04fd0e863e10dd0d71" host="10.0.0.113" May 13 00:33:40.740741 containerd[1556]: 2025-05-13 00:33:40.700 [INFO][3089] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.113" May 13 00:33:40.740741 containerd[1556]: 2025-05-13 00:33:40.704 [INFO][3089] ipam/ipam.go 489: Trying affinity for 192.168.6.192/26 host="10.0.0.113" May 13 00:33:40.740741 containerd[1556]: 2025-05-13 00:33:40.706 [INFO][3089] ipam/ipam.go 155: Attempting to load block cidr=192.168.6.192/26 host="10.0.0.113" May 13 00:33:40.740741 containerd[1556]: 2025-05-13 00:33:40.709 [INFO][3089] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.6.192/26 
host="10.0.0.113" May 13 00:33:40.740741 containerd[1556]: 2025-05-13 00:33:40.709 [INFO][3089] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.6.192/26 handle="k8s-pod-network.3094329bbf342c091bc048f2cbbebcb9171e9962b9655b04fd0e863e10dd0d71" host="10.0.0.113" May 13 00:33:40.740741 containerd[1556]: 2025-05-13 00:33:40.711 [INFO][3089] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3094329bbf342c091bc048f2cbbebcb9171e9962b9655b04fd0e863e10dd0d71 May 13 00:33:40.740741 containerd[1556]: 2025-05-13 00:33:40.714 [INFO][3089] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.6.192/26 handle="k8s-pod-network.3094329bbf342c091bc048f2cbbebcb9171e9962b9655b04fd0e863e10dd0d71" host="10.0.0.113" May 13 00:33:40.740741 containerd[1556]: 2025-05-13 00:33:40.720 [INFO][3089] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.6.195/26] block=192.168.6.192/26 handle="k8s-pod-network.3094329bbf342c091bc048f2cbbebcb9171e9962b9655b04fd0e863e10dd0d71" host="10.0.0.113" May 13 00:33:40.740741 containerd[1556]: 2025-05-13 00:33:40.720 [INFO][3089] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.6.195/26] handle="k8s-pod-network.3094329bbf342c091bc048f2cbbebcb9171e9962b9655b04fd0e863e10dd0d71" host="10.0.0.113" May 13 00:33:40.740741 containerd[1556]: 2025-05-13 00:33:40.720 [INFO][3089] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 13 00:33:40.740741 containerd[1556]: 2025-05-13 00:33:40.720 [INFO][3089] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.6.195/26] IPv6=[] ContainerID="3094329bbf342c091bc048f2cbbebcb9171e9962b9655b04fd0e863e10dd0d71" HandleID="k8s-pod-network.3094329bbf342c091bc048f2cbbebcb9171e9962b9655b04fd0e863e10dd0d71" Workload="10.0.0.113-k8s-csi--node--driver--krtjb-eth0" May 13 00:33:40.741351 containerd[1556]: 2025-05-13 00:33:40.722 [INFO][3079] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3094329bbf342c091bc048f2cbbebcb9171e9962b9655b04fd0e863e10dd0d71" Namespace="calico-system" Pod="csi-node-driver-krtjb" WorkloadEndpoint="10.0.0.113-k8s-csi--node--driver--krtjb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113-k8s-csi--node--driver--krtjb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3c0a2164-4e36-4a38-9d6e-97e893941f35", ResourceVersion:"1123", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 33, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.113", ContainerID:"", Pod:"csi-node-driver-krtjb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.6.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"cali34017e6fec4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:33:40.741351 containerd[1556]: 2025-05-13 00:33:40.722 [INFO][3079] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.6.195/32] ContainerID="3094329bbf342c091bc048f2cbbebcb9171e9962b9655b04fd0e863e10dd0d71" Namespace="calico-system" Pod="csi-node-driver-krtjb" WorkloadEndpoint="10.0.0.113-k8s-csi--node--driver--krtjb-eth0" May 13 00:33:40.741351 containerd[1556]: 2025-05-13 00:33:40.722 [INFO][3079] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali34017e6fec4 ContainerID="3094329bbf342c091bc048f2cbbebcb9171e9962b9655b04fd0e863e10dd0d71" Namespace="calico-system" Pod="csi-node-driver-krtjb" WorkloadEndpoint="10.0.0.113-k8s-csi--node--driver--krtjb-eth0" May 13 00:33:40.741351 containerd[1556]: 2025-05-13 00:33:40.726 [INFO][3079] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3094329bbf342c091bc048f2cbbebcb9171e9962b9655b04fd0e863e10dd0d71" Namespace="calico-system" Pod="csi-node-driver-krtjb" WorkloadEndpoint="10.0.0.113-k8s-csi--node--driver--krtjb-eth0" May 13 00:33:40.741351 containerd[1556]: 2025-05-13 00:33:40.727 [INFO][3079] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3094329bbf342c091bc048f2cbbebcb9171e9962b9655b04fd0e863e10dd0d71" Namespace="calico-system" Pod="csi-node-driver-krtjb" WorkloadEndpoint="10.0.0.113-k8s-csi--node--driver--krtjb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113-k8s-csi--node--driver--krtjb-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3c0a2164-4e36-4a38-9d6e-97e893941f35", ResourceVersion:"1123", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 33, 18, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b7b4b9d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.113", ContainerID:"3094329bbf342c091bc048f2cbbebcb9171e9962b9655b04fd0e863e10dd0d71", Pod:"csi-node-driver-krtjb", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.6.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali34017e6fec4", MAC:"7a:3d:03:f4:0c:d9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:33:40.741351 containerd[1556]: 2025-05-13 00:33:40.738 [INFO][3079] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3094329bbf342c091bc048f2cbbebcb9171e9962b9655b04fd0e863e10dd0d71" Namespace="calico-system" Pod="csi-node-driver-krtjb" WorkloadEndpoint="10.0.0.113-k8s-csi--node--driver--krtjb-eth0" May 13 00:33:40.758374 containerd[1556]: time="2025-05-13T00:33:40.758289322Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:33:40.758482 containerd[1556]: time="2025-05-13T00:33:40.758381282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:33:40.758482 containerd[1556]: time="2025-05-13T00:33:40.758411402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:33:40.758537 containerd[1556]: time="2025-05-13T00:33:40.758509042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:33:40.777906 systemd-resolved[1445]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:33:40.789613 containerd[1556]: time="2025-05-13T00:33:40.789560282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-krtjb,Uid:3c0a2164-4e36-4a38-9d6e-97e893941f35,Namespace:calico-system,Attempt:1,} returns sandbox id \"3094329bbf342c091bc048f2cbbebcb9171e9962b9655b04fd0e863e10dd0d71\"" May 13 00:33:40.791357 containerd[1556]: time="2025-05-13T00:33:40.791270602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" May 13 00:33:41.379821 kubelet[1900]: E0513 00:33:41.379780 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:33:41.733978 containerd[1556]: time="2025-05-13T00:33:41.733851122Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:33:41.734449 containerd[1556]: time="2025-05-13T00:33:41.734407402Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7474935" May 13 00:33:41.735202 containerd[1556]: time="2025-05-13T00:33:41.735164162Z" level=info msg="ImageCreate event name:\"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:33:41.737192 containerd[1556]: time="2025-05-13T00:33:41.737122642Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
May 13 00:33:41.738563 containerd[1556]: time="2025-05-13T00:33:41.738525242Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"8844117\" in 947.11688ms" May 13 00:33:41.738563 containerd[1556]: time="2025-05-13T00:33:41.738556242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:15faf29e8b518d846c91c15785ff89e783d356ea0f2b22826f47a556ea32645b\"" May 13 00:33:41.740767 containerd[1556]: time="2025-05-13T00:33:41.740650802Z" level=info msg="CreateContainer within sandbox \"3094329bbf342c091bc048f2cbbebcb9171e9962b9655b04fd0e863e10dd0d71\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 13 00:33:41.756781 containerd[1556]: time="2025-05-13T00:33:41.756729282Z" level=info msg="CreateContainer within sandbox \"3094329bbf342c091bc048f2cbbebcb9171e9962b9655b04fd0e863e10dd0d71\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a29a7d5fc0ce6c2bbef35f26601a43ed1b17cc00f5e962eced24bbf6e4c9ed07\"" May 13 00:33:41.757452 containerd[1556]: time="2025-05-13T00:33:41.757355402Z" level=info msg="StartContainer for \"a29a7d5fc0ce6c2bbef35f26601a43ed1b17cc00f5e962eced24bbf6e4c9ed07\"" May 13 00:33:41.815826 containerd[1556]: time="2025-05-13T00:33:41.815783522Z" level=info msg="StartContainer for \"a29a7d5fc0ce6c2bbef35f26601a43ed1b17cc00f5e962eced24bbf6e4c9ed07\" returns successfully" May 13 00:33:41.817247 containerd[1556]: time="2025-05-13T00:33:41.817067562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" May 13 00:33:42.380661 kubelet[1900]: E0513 00:33:42.380623 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" May 13 00:33:42.670780 systemd-networkd[1237]: cali34017e6fec4: Gained IPv6LL May 13 00:33:42.778737 kubelet[1900]: I0513 00:33:42.778557 1900 topology_manager.go:215] "Topology Admit Handler" podUID="c649a477-d4cd-4391-8e0f-9104c2ff2c9b" podNamespace="default" podName="nfs-server-provisioner-0" May 13 00:33:42.778853 containerd[1556]: time="2025-05-13T00:33:42.778661025Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:33:42.779556 containerd[1556]: time="2025-05-13T00:33:42.779106429Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13124299" May 13 00:33:42.781579 containerd[1556]: time="2025-05-13T00:33:42.781272604Z" level=info msg="ImageCreate event name:\"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:33:42.783925 containerd[1556]: time="2025-05-13T00:33:42.783891103Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:33:42.785086 containerd[1556]: time="2025-05-13T00:33:42.785054551Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"14493433\" in 967.952949ms" May 13 00:33:42.785216 containerd[1556]: time="2025-05-13T00:33:42.785198552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image 
reference \"sha256:a91b1f00752edc175f270a01b33683fa80818734aa2274388785eaf3364315dc\"" May 13 00:33:42.787301 containerd[1556]: time="2025-05-13T00:33:42.787214927Z" level=info msg="CreateContainer within sandbox \"3094329bbf342c091bc048f2cbbebcb9171e9962b9655b04fd0e863e10dd0d71\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 13 00:33:42.801775 containerd[1556]: time="2025-05-13T00:33:42.801718430Z" level=info msg="CreateContainer within sandbox \"3094329bbf342c091bc048f2cbbebcb9171e9962b9655b04fd0e863e10dd0d71\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"f68232e6c639cb86c1ab90f146095760ae08c94536fe3cb6a364d4a868254fbb\"" May 13 00:33:42.802487 containerd[1556]: time="2025-05-13T00:33:42.802458076Z" level=info msg="StartContainer for \"f68232e6c639cb86c1ab90f146095760ae08c94536fe3cb6a364d4a868254fbb\"" May 13 00:33:42.886677 containerd[1556]: time="2025-05-13T00:33:42.886630198Z" level=info msg="StartContainer for \"f68232e6c639cb86c1ab90f146095760ae08c94536fe3cb6a364d4a868254fbb\" returns successfully" May 13 00:33:42.919476 kubelet[1900]: I0513 00:33:42.919349 1900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/c649a477-d4cd-4391-8e0f-9104c2ff2c9b-data\") pod \"nfs-server-provisioner-0\" (UID: \"c649a477-d4cd-4391-8e0f-9104c2ff2c9b\") " pod="default/nfs-server-provisioner-0" May 13 00:33:42.919476 kubelet[1900]: I0513 00:33:42.919408 1900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lqpm\" (UniqueName: \"kubernetes.io/projected/c649a477-d4cd-4391-8e0f-9104c2ff2c9b-kube-api-access-6lqpm\") pod \"nfs-server-provisioner-0\" (UID: \"c649a477-d4cd-4391-8e0f-9104c2ff2c9b\") " pod="default/nfs-server-provisioner-0" May 13 00:33:43.082538 containerd[1556]: time="2025-05-13T00:33:43.082133721Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:c649a477-d4cd-4391-8e0f-9104c2ff2c9b,Namespace:default,Attempt:0,}" May 13 00:33:43.226137 systemd-networkd[1237]: cali60e51b789ff: Link UP May 13 00:33:43.227278 systemd-networkd[1237]: cali60e51b789ff: Gained carrier May 13 00:33:43.246990 containerd[1556]: 2025-05-13 00:33:43.138 [INFO][3245] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.113-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default c649a477-d4cd-4391-8e0f-9104c2ff2c9b 1153 0 2025-05-13 00:33:42 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.113 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="1dc7004aebdc098aff0970323845af63789aea35b268420e32cee3c3d460dd04" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.113-k8s-nfs--server--provisioner--0-" May 13 00:33:43.246990 containerd[1556]: 2025-05-13 00:33:43.138 [INFO][3245] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1dc7004aebdc098aff0970323845af63789aea35b268420e32cee3c3d460dd04" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.113-k8s-nfs--server--provisioner--0-eth0" May 13 00:33:43.246990 
containerd[1556]: 2025-05-13 00:33:43.172 [INFO][3259] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1dc7004aebdc098aff0970323845af63789aea35b268420e32cee3c3d460dd04" HandleID="k8s-pod-network.1dc7004aebdc098aff0970323845af63789aea35b268420e32cee3c3d460dd04" Workload="10.0.0.113-k8s-nfs--server--provisioner--0-eth0" May 13 00:33:43.246990 containerd[1556]: 2025-05-13 00:33:43.185 [INFO][3259] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1dc7004aebdc098aff0970323845af63789aea35b268420e32cee3c3d460dd04" HandleID="k8s-pod-network.1dc7004aebdc098aff0970323845af63789aea35b268420e32cee3c3d460dd04" Workload="10.0.0.113-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003068c0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.113", "pod":"nfs-server-provisioner-0", "timestamp":"2025-05-13 00:33:43.172684888 +0000 UTC"}, Hostname:"10.0.0.113", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:33:43.246990 containerd[1556]: 2025-05-13 00:33:43.185 [INFO][3259] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:33:43.246990 containerd[1556]: 2025-05-13 00:33:43.185 [INFO][3259] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 13 00:33:43.246990 containerd[1556]: 2025-05-13 00:33:43.185 [INFO][3259] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.113' May 13 00:33:43.246990 containerd[1556]: 2025-05-13 00:33:43.187 [INFO][3259] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1dc7004aebdc098aff0970323845af63789aea35b268420e32cee3c3d460dd04" host="10.0.0.113" May 13 00:33:43.246990 containerd[1556]: 2025-05-13 00:33:43.191 [INFO][3259] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.113" May 13 00:33:43.246990 containerd[1556]: 2025-05-13 00:33:43.200 [INFO][3259] ipam/ipam.go 489: Trying affinity for 192.168.6.192/26 host="10.0.0.113" May 13 00:33:43.246990 containerd[1556]: 2025-05-13 00:33:43.205 [INFO][3259] ipam/ipam.go 155: Attempting to load block cidr=192.168.6.192/26 host="10.0.0.113" May 13 00:33:43.246990 containerd[1556]: 2025-05-13 00:33:43.207 [INFO][3259] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.6.192/26 host="10.0.0.113" May 13 00:33:43.246990 containerd[1556]: 2025-05-13 00:33:43.207 [INFO][3259] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.6.192/26 handle="k8s-pod-network.1dc7004aebdc098aff0970323845af63789aea35b268420e32cee3c3d460dd04" host="10.0.0.113" May 13 00:33:43.246990 containerd[1556]: 2025-05-13 00:33:43.209 [INFO][3259] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1dc7004aebdc098aff0970323845af63789aea35b268420e32cee3c3d460dd04 May 13 00:33:43.246990 containerd[1556]: 2025-05-13 00:33:43.215 [INFO][3259] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.6.192/26 handle="k8s-pod-network.1dc7004aebdc098aff0970323845af63789aea35b268420e32cee3c3d460dd04" host="10.0.0.113" May 13 00:33:43.246990 containerd[1556]: 2025-05-13 00:33:43.221 [INFO][3259] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.6.196/26] block=192.168.6.192/26 
handle="k8s-pod-network.1dc7004aebdc098aff0970323845af63789aea35b268420e32cee3c3d460dd04" host="10.0.0.113" May 13 00:33:43.246990 containerd[1556]: 2025-05-13 00:33:43.221 [INFO][3259] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.6.196/26] handle="k8s-pod-network.1dc7004aebdc098aff0970323845af63789aea35b268420e32cee3c3d460dd04" host="10.0.0.113" May 13 00:33:43.246990 containerd[1556]: 2025-05-13 00:33:43.221 [INFO][3259] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 13 00:33:43.246990 containerd[1556]: 2025-05-13 00:33:43.221 [INFO][3259] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.6.196/26] IPv6=[] ContainerID="1dc7004aebdc098aff0970323845af63789aea35b268420e32cee3c3d460dd04" HandleID="k8s-pod-network.1dc7004aebdc098aff0970323845af63789aea35b268420e32cee3c3d460dd04" Workload="10.0.0.113-k8s-nfs--server--provisioner--0-eth0" May 13 00:33:43.247885 containerd[1556]: 2025-05-13 00:33:43.223 [INFO][3245] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1dc7004aebdc098aff0970323845af63789aea35b268420e32cee3c3d460dd04" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.113-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"c649a477-d4cd-4391-8e0f-9104c2ff2c9b", ResourceVersion:"1153", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 33, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.113", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.6.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:33:43.247885 containerd[1556]: 2025-05-13 00:33:43.223 [INFO][3245] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.6.196/32] ContainerID="1dc7004aebdc098aff0970323845af63789aea35b268420e32cee3c3d460dd04" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.113-k8s-nfs--server--provisioner--0-eth0" May 13 00:33:43.247885 containerd[1556]: 2025-05-13 00:33:43.223 [INFO][3245] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="1dc7004aebdc098aff0970323845af63789aea35b268420e32cee3c3d460dd04" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.113-k8s-nfs--server--provisioner--0-eth0" May 13 00:33:43.247885 containerd[1556]: 2025-05-13 00:33:43.228 [INFO][3245] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1dc7004aebdc098aff0970323845af63789aea35b268420e32cee3c3d460dd04" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.113-k8s-nfs--server--provisioner--0-eth0" May 13 00:33:43.248070 containerd[1556]: 2025-05-13 00:33:43.228 [INFO][3245] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1dc7004aebdc098aff0970323845af63789aea35b268420e32cee3c3d460dd04" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.113-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"c649a477-d4cd-4391-8e0f-9104c2ff2c9b", ResourceVersion:"1153", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 33, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.113", ContainerID:"1dc7004aebdc098aff0970323845af63789aea35b268420e32cee3c3d460dd04", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.6.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"3a:4a:c6:73:ac:9d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:33:43.248070 containerd[1556]: 2025-05-13 00:33:43.245 [INFO][3245] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1dc7004aebdc098aff0970323845af63789aea35b268420e32cee3c3d460dd04" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.113-k8s-nfs--server--provisioner--0-eth0" May 13 00:33:43.265272 containerd[1556]: time="2025-05-13T00:33:43.264862786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:33:43.265272 containerd[1556]: time="2025-05-13T00:33:43.265237989Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:33:43.265272 containerd[1556]: time="2025-05-13T00:33:43.265251309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:33:43.265481 containerd[1556]: time="2025-05-13T00:33:43.265326389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:33:43.290169 systemd-resolved[1445]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:33:43.306688 containerd[1556]: time="2025-05-13T00:33:43.306654666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:c649a477-d4cd-4391-8e0f-9104c2ff2c9b,Namespace:default,Attempt:0,} returns sandbox id \"1dc7004aebdc098aff0970323845af63789aea35b268420e32cee3c3d460dd04\"" May 13 00:33:43.308620 containerd[1556]: time="2025-05-13T00:33:43.308580559Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" May 13 00:33:43.381346 kubelet[1900]: E0513 00:33:43.381292 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:33:43.530170 kubelet[1900]: I0513 00:33:43.530123 1900 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 13 00:33:43.530170 kubelet[1900]: I0513 00:33:43.530169 1900 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 13 00:33:44.382196 kubelet[1900]: E0513 00:33:44.382159 
1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:33:44.908053 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3462611671.mount: Deactivated successfully. May 13 00:33:45.101782 systemd-networkd[1237]: cali60e51b789ff: Gained IPv6LL May 13 00:33:45.383194 kubelet[1900]: E0513 00:33:45.383163 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:33:46.226158 containerd[1556]: time="2025-05-13T00:33:46.226102267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:33:46.227723 containerd[1556]: time="2025-05-13T00:33:46.227683636Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=87373625" May 13 00:33:46.231081 containerd[1556]: time="2025-05-13T00:33:46.228878163Z" level=info msg="ImageCreate event name:\"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:33:46.232287 containerd[1556]: time="2025-05-13T00:33:46.232242981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 00:33:46.233355 containerd[1556]: time="2025-05-13T00:33:46.233317067Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"87371201\" in 2.924606307s" May 13 
00:33:46.233419 containerd[1556]: time="2025-05-13T00:33:46.233354867Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" May 13 00:33:46.236035 containerd[1556]: time="2025-05-13T00:33:46.236004562Z" level=info msg="CreateContainer within sandbox \"1dc7004aebdc098aff0970323845af63789aea35b268420e32cee3c3d460dd04\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" May 13 00:33:46.256721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1079601446.mount: Deactivated successfully. May 13 00:33:46.260090 containerd[1556]: time="2025-05-13T00:33:46.260057215Z" level=info msg="CreateContainer within sandbox \"1dc7004aebdc098aff0970323845af63789aea35b268420e32cee3c3d460dd04\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"754c6f4234255ed390683cec465791be77b91b73c9e3f7777ce07211bf3fe3ae\"" May 13 00:33:46.261625 containerd[1556]: time="2025-05-13T00:33:46.261093981Z" level=info msg="StartContainer for \"754c6f4234255ed390683cec465791be77b91b73c9e3f7777ce07211bf3fe3ae\"" May 13 00:33:46.383576 kubelet[1900]: E0513 00:33:46.383522 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:33:46.393177 containerd[1556]: time="2025-05-13T00:33:46.393138750Z" level=info msg="StartContainer for \"754c6f4234255ed390683cec465791be77b91b73c9e3f7777ce07211bf3fe3ae\" returns successfully" May 13 00:33:46.615437 kubelet[1900]: I0513 00:33:46.615284 1900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.688655098 podStartE2EDuration="4.615267257s" podCreationTimestamp="2025-05-13 00:33:42 +0000 UTC" firstStartedPulling="2025-05-13 00:33:43.307998715 +0000 UTC m=+25.967181674" lastFinishedPulling="2025-05-13 00:33:46.234610874 +0000 UTC 
m=+28.893793833" observedRunningTime="2025-05-13 00:33:46.615052096 +0000 UTC m=+29.274235055" watchObservedRunningTime="2025-05-13 00:33:46.615267257 +0000 UTC m=+29.274450216" May 13 00:33:46.615571 kubelet[1900]: I0513 00:33:46.615497 1900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-krtjb" podStartSLOduration=26.620409784 podStartE2EDuration="28.615490699s" podCreationTimestamp="2025-05-13 00:33:18 +0000 UTC" firstStartedPulling="2025-05-13 00:33:40.790860402 +0000 UTC m=+23.450043361" lastFinishedPulling="2025-05-13 00:33:42.785941317 +0000 UTC m=+25.445124276" observedRunningTime="2025-05-13 00:33:43.607916686 +0000 UTC m=+26.267099645" watchObservedRunningTime="2025-05-13 00:33:46.615490699 +0000 UTC m=+29.274673618" May 13 00:33:47.393694 kubelet[1900]: E0513 00:33:47.384267 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:33:48.385078 kubelet[1900]: E0513 00:33:48.385029 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:33:49.385898 kubelet[1900]: E0513 00:33:49.385854 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:33:49.763340 kubelet[1900]: I0513 00:33:49.763218 1900 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 13 00:33:49.764004 kubelet[1900]: E0513 00:33:49.763949 1900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:33:50.386157 kubelet[1900]: E0513 00:33:50.386111 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:33:50.611578 kubelet[1900]: E0513 00:33:50.611513 1900 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:33:51.387043 kubelet[1900]: E0513 00:33:51.387001 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:33:52.388053 kubelet[1900]: E0513 00:33:52.387998 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:33:53.388215 kubelet[1900]: E0513 00:33:53.388157 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:33:54.389023 kubelet[1900]: E0513 00:33:54.388973 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:33:54.704804 update_engine[1538]: I20250513 00:33:54.704648 1538 update_attempter.cc:509] Updating boot flags... May 13 00:33:54.734634 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3486) May 13 00:33:54.767660 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3488) May 13 00:33:54.792636 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3488) May 13 00:33:55.390006 kubelet[1900]: E0513 00:33:55.389931 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:33:56.390581 kubelet[1900]: E0513 00:33:56.390533 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 13 00:33:56.492817 kubelet[1900]: I0513 00:33:56.492774 1900 topology_manager.go:215] "Topology Admit Handler" podUID="e0253009-e6e5-42a6-850a-32397f118a78" podNamespace="default" podName="test-pod-1" May 13 00:33:56.604087 kubelet[1900]: I0513 00:33:56.604036 1900 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-24430a1d-a1da-45d7-a540-8b84a3aa82fa\" (UniqueName: \"kubernetes.io/nfs/e0253009-e6e5-42a6-850a-32397f118a78-pvc-24430a1d-a1da-45d7-a540-8b84a3aa82fa\") pod \"test-pod-1\" (UID: \"e0253009-e6e5-42a6-850a-32397f118a78\") " pod="default/test-pod-1" May 13 00:33:56.604087 kubelet[1900]: I0513 00:33:56.604087 1900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nhlp\" (UniqueName: \"kubernetes.io/projected/e0253009-e6e5-42a6-850a-32397f118a78-kube-api-access-4nhlp\") pod \"test-pod-1\" (UID: \"e0253009-e6e5-42a6-850a-32397f118a78\") " pod="default/test-pod-1" May 13 00:33:56.724643 kernel: FS-Cache: Loaded May 13 00:33:56.748681 kernel: RPC: Registered named UNIX socket transport module. May 13 00:33:56.748822 kernel: RPC: Registered udp transport module. May 13 00:33:56.748841 kernel: RPC: Registered tcp transport module. May 13 00:33:56.748865 kernel: RPC: Registered tcp-with-tls transport module. May 13 00:33:56.750045 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
May 13 00:33:56.783550 kubelet[1900]: I0513 00:33:56.781742 1900 topology_manager.go:215] "Topology Admit Handler" podUID="b2b1a1a9-0f09-4ff3-9f85-02f1d7bb67dd" podNamespace="calico-system" podName="calico-typha-6b4fb85cbb-kqtwc" May 13 00:33:56.905747 kubelet[1900]: I0513 00:33:56.905704 1900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b2b1a1a9-0f09-4ff3-9f85-02f1d7bb67dd-tigera-ca-bundle\") pod \"calico-typha-6b4fb85cbb-kqtwc\" (UID: \"b2b1a1a9-0f09-4ff3-9f85-02f1d7bb67dd\") " pod="calico-system/calico-typha-6b4fb85cbb-kqtwc" May 13 00:33:56.905747 kubelet[1900]: I0513 00:33:56.905751 1900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b67jf\" (UniqueName: \"kubernetes.io/projected/b2b1a1a9-0f09-4ff3-9f85-02f1d7bb67dd-kube-api-access-b67jf\") pod \"calico-typha-6b4fb85cbb-kqtwc\" (UID: \"b2b1a1a9-0f09-4ff3-9f85-02f1d7bb67dd\") " pod="calico-system/calico-typha-6b4fb85cbb-kqtwc" May 13 00:33:56.905945 kubelet[1900]: I0513 00:33:56.905773 1900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b2b1a1a9-0f09-4ff3-9f85-02f1d7bb67dd-typha-certs\") pod \"calico-typha-6b4fb85cbb-kqtwc\" (UID: \"b2b1a1a9-0f09-4ff3-9f85-02f1d7bb67dd\") " pod="calico-system/calico-typha-6b4fb85cbb-kqtwc" May 13 00:33:56.934167 containerd[1556]: time="2025-05-13T00:33:56.934130092Z" level=info msg="StopContainer for \"8d0b437b26b1361cfb63c799bf17cda31549e1c3540186827167ca60087a8685\" with timeout 5 (s)" May 13 00:33:56.934651 containerd[1556]: time="2025-05-13T00:33:56.934330373Z" level=info msg="Stop container \"8d0b437b26b1361cfb63c799bf17cda31549e1c3540186827167ca60087a8685\" with signal terminated" May 13 00:33:56.941012 kernel: NFS: Registering the id_resolver key type May 13 00:33:56.941124 kernel: Key type 
id_resolver registered May 13 00:33:56.941146 kernel: Key type id_legacy registered May 13 00:33:56.967189 containerd[1556]: time="2025-05-13T00:33:56.967095868Z" level=info msg="shim disconnected" id=8d0b437b26b1361cfb63c799bf17cda31549e1c3540186827167ca60087a8685 namespace=k8s.io May 13 00:33:56.967189 containerd[1556]: time="2025-05-13T00:33:56.967183788Z" level=warning msg="cleaning up after shim disconnected" id=8d0b437b26b1361cfb63c799bf17cda31549e1c3540186827167ca60087a8685 namespace=k8s.io May 13 00:33:56.967189 containerd[1556]: time="2025-05-13T00:33:56.967194468Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:33:56.968315 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d0b437b26b1361cfb63c799bf17cda31549e1c3540186827167ca60087a8685-rootfs.mount: Deactivated successfully. May 13 00:33:56.972137 nfsidmap[3548]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 13 00:33:56.975743 nfsidmap[3563]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 13 00:33:56.989182 containerd[1556]: time="2025-05-13T00:33:56.989123772Z" level=info msg="StopContainer for \"8d0b437b26b1361cfb63c799bf17cda31549e1c3540186827167ca60087a8685\" returns successfully" May 13 00:33:56.989732 containerd[1556]: time="2025-05-13T00:33:56.989690933Z" level=info msg="StopPodSandbox for \"619de8f5eaacb21d068209b6bcb34be94dcf93586d926a617d46794a40f36ae8\"" May 13 00:33:56.989792 containerd[1556]: time="2025-05-13T00:33:56.989742014Z" level=info msg="Container to stop \"0a50f8d77b494ee39a6a82e08f08fe9f0d5fff782227bf8ccf4a24a38d2c7c75\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:33:56.989792 containerd[1556]: time="2025-05-13T00:33:56.989755214Z" level=info msg="Container to stop \"8d0b437b26b1361cfb63c799bf17cda31549e1c3540186827167ca60087a8685\" must be in running or unknown 
state, current state \"CONTAINER_EXITED\"" May 13 00:33:56.989792 containerd[1556]: time="2025-05-13T00:33:56.989765014Z" level=info msg="Container to stop \"2856960ec97ac231301582d12f1bc9f138c53fd6cb9e7c864eb151ce65337b03\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 13 00:33:57.010714 containerd[1556]: time="2025-05-13T00:33:57.010465512Z" level=info msg="shim disconnected" id=619de8f5eaacb21d068209b6bcb34be94dcf93586d926a617d46794a40f36ae8 namespace=k8s.io May 13 00:33:57.010714 containerd[1556]: time="2025-05-13T00:33:57.010517272Z" level=warning msg="cleaning up after shim disconnected" id=619de8f5eaacb21d068209b6bcb34be94dcf93586d926a617d46794a40f36ae8 namespace=k8s.io May 13 00:33:57.010714 containerd[1556]: time="2025-05-13T00:33:57.010525832Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 13 00:33:57.024736 containerd[1556]: time="2025-05-13T00:33:57.024687471Z" level=info msg="TearDown network for sandbox \"619de8f5eaacb21d068209b6bcb34be94dcf93586d926a617d46794a40f36ae8\" successfully" May 13 00:33:57.024736 containerd[1556]: time="2025-05-13T00:33:57.024724071Z" level=info msg="StopPodSandbox for \"619de8f5eaacb21d068209b6bcb34be94dcf93586d926a617d46794a40f36ae8\" returns successfully" May 13 00:33:57.062626 kubelet[1900]: I0513 00:33:57.060456 1900 topology_manager.go:215] "Topology Admit Handler" podUID="397f22b8-3a3d-4074-a31e-ee4585410ef7" podNamespace="calico-system" podName="calico-node-ksr2p" May 13 00:33:57.062626 kubelet[1900]: E0513 00:33:57.060517 1900 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e966b341-3de5-4e67-9f8b-231e82f2bd6b" containerName="flexvol-driver" May 13 00:33:57.062626 kubelet[1900]: E0513 00:33:57.060527 1900 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e966b341-3de5-4e67-9f8b-231e82f2bd6b" containerName="calico-node" May 13 00:33:57.062626 kubelet[1900]: E0513 00:33:57.060533 1900 cpu_manager.go:395] "RemoveStaleState: removing 
container" podUID="e966b341-3de5-4e67-9f8b-231e82f2bd6b" containerName="install-cni" May 13 00:33:57.062626 kubelet[1900]: I0513 00:33:57.060551 1900 memory_manager.go:354] "RemoveStaleState removing state" podUID="e966b341-3de5-4e67-9f8b-231e82f2bd6b" containerName="calico-node" May 13 00:33:57.086008 kubelet[1900]: E0513 00:33:57.085971 1900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:33:57.086647 containerd[1556]: time="2025-05-13T00:33:57.086443438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b4fb85cbb-kqtwc,Uid:b2b1a1a9-0f09-4ff3-9f85-02f1d7bb67dd,Namespace:calico-system,Attempt:0,}" May 13 00:33:57.097973 containerd[1556]: time="2025-05-13T00:33:57.097930950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:e0253009-e6e5-42a6-850a-32397f118a78,Namespace:default,Attempt:0,}" May 13 00:33:57.105444 containerd[1556]: time="2025-05-13T00:33:57.105238409Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:33:57.105444 containerd[1556]: time="2025-05-13T00:33:57.105297370Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:33:57.105444 containerd[1556]: time="2025-05-13T00:33:57.105311330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:33:57.105444 containerd[1556]: time="2025-05-13T00:33:57.105407970Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:33:57.146572 containerd[1556]: time="2025-05-13T00:33:57.146532962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6b4fb85cbb-kqtwc,Uid:b2b1a1a9-0f09-4ff3-9f85-02f1d7bb67dd,Namespace:calico-system,Attempt:0,} returns sandbox id \"5632f6398384396fe2e45f2c6c75a7551bd81b23817b973462cbf046d9e1cb41\"" May 13 00:33:57.147172 kubelet[1900]: E0513 00:33:57.147151 1900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 00:33:57.147997 containerd[1556]: time="2025-05-13T00:33:57.147974006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\"" May 13 00:33:57.208059 kubelet[1900]: I0513 00:33:57.208021 1900 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e966b341-3de5-4e67-9f8b-231e82f2bd6b-cni-bin-dir\") pod \"e966b341-3de5-4e67-9f8b-231e82f2bd6b\" (UID: \"e966b341-3de5-4e67-9f8b-231e82f2bd6b\") " May 13 00:33:57.208059 kubelet[1900]: I0513 00:33:57.208061 1900 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e966b341-3de5-4e67-9f8b-231e82f2bd6b-lib-modules\") pod \"e966b341-3de5-4e67-9f8b-231e82f2bd6b\" (UID: \"e966b341-3de5-4e67-9f8b-231e82f2bd6b\") " May 13 00:33:57.208214 kubelet[1900]: I0513 00:33:57.208081 1900 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e966b341-3de5-4e67-9f8b-231e82f2bd6b-var-run-calico\") pod \"e966b341-3de5-4e67-9f8b-231e82f2bd6b\" (UID: \"e966b341-3de5-4e67-9f8b-231e82f2bd6b\") " May 13 00:33:57.208214 kubelet[1900]: I0513 00:33:57.208097 1900 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/e966b341-3de5-4e67-9f8b-231e82f2bd6b-flexvol-driver-host\") pod \"e966b341-3de5-4e67-9f8b-231e82f2bd6b\" (UID: \"e966b341-3de5-4e67-9f8b-231e82f2bd6b\") " May 13 00:33:57.208214 kubelet[1900]: I0513 00:33:57.208119 1900 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e966b341-3de5-4e67-9f8b-231e82f2bd6b-cni-log-dir\") pod \"e966b341-3de5-4e67-9f8b-231e82f2bd6b\" (UID: \"e966b341-3de5-4e67-9f8b-231e82f2bd6b\") " May 13 00:33:57.208214 kubelet[1900]: I0513 00:33:57.208133 1900 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e966b341-3de5-4e67-9f8b-231e82f2bd6b-var-lib-calico\") pod \"e966b341-3de5-4e67-9f8b-231e82f2bd6b\" (UID: \"e966b341-3de5-4e67-9f8b-231e82f2bd6b\") " May 13 00:33:57.208214 kubelet[1900]: I0513 00:33:57.208156 1900 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e966b341-3de5-4e67-9f8b-231e82f2bd6b-tigera-ca-bundle\") pod \"e966b341-3de5-4e67-9f8b-231e82f2bd6b\" (UID: \"e966b341-3de5-4e67-9f8b-231e82f2bd6b\") " May 13 00:33:57.208214 kubelet[1900]: I0513 00:33:57.208169 1900 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e966b341-3de5-4e67-9f8b-231e82f2bd6b-cni-net-dir\") pod \"e966b341-3de5-4e67-9f8b-231e82f2bd6b\" (UID: \"e966b341-3de5-4e67-9f8b-231e82f2bd6b\") " May 13 00:33:57.208344 kubelet[1900]: I0513 00:33:57.208209 1900 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e966b341-3de5-4e67-9f8b-231e82f2bd6b-node-certs\") pod \"e966b341-3de5-4e67-9f8b-231e82f2bd6b\" (UID: \"e966b341-3de5-4e67-9f8b-231e82f2bd6b\") " May 13 00:33:57.208344 kubelet[1900]: I0513 00:33:57.208223 1900 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e966b341-3de5-4e67-9f8b-231e82f2bd6b-xtables-lock\") pod \"e966b341-3de5-4e67-9f8b-231e82f2bd6b\" (UID: \"e966b341-3de5-4e67-9f8b-231e82f2bd6b\") " May 13 00:33:57.208344 kubelet[1900]: I0513 00:33:57.208239 1900 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e966b341-3de5-4e67-9f8b-231e82f2bd6b-policysync\") pod \"e966b341-3de5-4e67-9f8b-231e82f2bd6b\" (UID: \"e966b341-3de5-4e67-9f8b-231e82f2bd6b\") " May 13 00:33:57.208344 kubelet[1900]: I0513 00:33:57.208258 1900 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zghsh\" (UniqueName: \"kubernetes.io/projected/e966b341-3de5-4e67-9f8b-231e82f2bd6b-kube-api-access-zghsh\") pod \"e966b341-3de5-4e67-9f8b-231e82f2bd6b\" (UID: \"e966b341-3de5-4e67-9f8b-231e82f2bd6b\") " May 13 00:33:57.208344 kubelet[1900]: I0513 00:33:57.208306 1900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/397f22b8-3a3d-4074-a31e-ee4585410ef7-cni-net-dir\") pod \"calico-node-ksr2p\" (UID: \"397f22b8-3a3d-4074-a31e-ee4585410ef7\") " pod="calico-system/calico-node-ksr2p" May 13 00:33:57.208344 kubelet[1900]: I0513 00:33:57.208326 1900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/397f22b8-3a3d-4074-a31e-ee4585410ef7-flexvol-driver-host\") pod \"calico-node-ksr2p\" (UID: \"397f22b8-3a3d-4074-a31e-ee4585410ef7\") " pod="calico-system/calico-node-ksr2p" May 13 00:33:57.208490 kubelet[1900]: I0513 00:33:57.208345 1900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/397f22b8-3a3d-4074-a31e-ee4585410ef7-tigera-ca-bundle\") pod \"calico-node-ksr2p\" (UID: \"397f22b8-3a3d-4074-a31e-ee4585410ef7\") " pod="calico-system/calico-node-ksr2p" May 13 00:33:57.208490 kubelet[1900]: I0513 00:33:57.208376 1900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5cx7\" (UniqueName: \"kubernetes.io/projected/397f22b8-3a3d-4074-a31e-ee4585410ef7-kube-api-access-r5cx7\") pod \"calico-node-ksr2p\" (UID: \"397f22b8-3a3d-4074-a31e-ee4585410ef7\") " pod="calico-system/calico-node-ksr2p" May 13 00:33:57.208490 kubelet[1900]: I0513 00:33:57.208399 1900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/397f22b8-3a3d-4074-a31e-ee4585410ef7-var-run-calico\") pod \"calico-node-ksr2p\" (UID: \"397f22b8-3a3d-4074-a31e-ee4585410ef7\") " pod="calico-system/calico-node-ksr2p" May 13 00:33:57.208490 kubelet[1900]: I0513 00:33:57.208419 1900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/397f22b8-3a3d-4074-a31e-ee4585410ef7-xtables-lock\") pod \"calico-node-ksr2p\" (UID: \"397f22b8-3a3d-4074-a31e-ee4585410ef7\") " pod="calico-system/calico-node-ksr2p" May 13 00:33:57.208490 kubelet[1900]: I0513 00:33:57.208434 1900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/397f22b8-3a3d-4074-a31e-ee4585410ef7-policysync\") pod \"calico-node-ksr2p\" (UID: \"397f22b8-3a3d-4074-a31e-ee4585410ef7\") " pod="calico-system/calico-node-ksr2p" May 13 00:33:57.208627 kubelet[1900]: I0513 00:33:57.208450 1900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: 
\"kubernetes.io/secret/397f22b8-3a3d-4074-a31e-ee4585410ef7-node-certs\") pod \"calico-node-ksr2p\" (UID: \"397f22b8-3a3d-4074-a31e-ee4585410ef7\") " pod="calico-system/calico-node-ksr2p" May 13 00:33:57.208627 kubelet[1900]: I0513 00:33:57.208465 1900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/397f22b8-3a3d-4074-a31e-ee4585410ef7-var-lib-calico\") pod \"calico-node-ksr2p\" (UID: \"397f22b8-3a3d-4074-a31e-ee4585410ef7\") " pod="calico-system/calico-node-ksr2p" May 13 00:33:57.208627 kubelet[1900]: I0513 00:33:57.208484 1900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/397f22b8-3a3d-4074-a31e-ee4585410ef7-cni-bin-dir\") pod \"calico-node-ksr2p\" (UID: \"397f22b8-3a3d-4074-a31e-ee4585410ef7\") " pod="calico-system/calico-node-ksr2p" May 13 00:33:57.208627 kubelet[1900]: I0513 00:33:57.208501 1900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/397f22b8-3a3d-4074-a31e-ee4585410ef7-lib-modules\") pod \"calico-node-ksr2p\" (UID: \"397f22b8-3a3d-4074-a31e-ee4585410ef7\") " pod="calico-system/calico-node-ksr2p" May 13 00:33:57.208627 kubelet[1900]: I0513 00:33:57.208517 1900 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/397f22b8-3a3d-4074-a31e-ee4585410ef7-cni-log-dir\") pod \"calico-node-ksr2p\" (UID: \"397f22b8-3a3d-4074-a31e-ee4585410ef7\") " pod="calico-system/calico-node-ksr2p" May 13 00:33:57.208744 kubelet[1900]: I0513 00:33:57.208626 1900 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e966b341-3de5-4e67-9f8b-231e82f2bd6b-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod 
"e966b341-3de5-4e67-9f8b-231e82f2bd6b" (UID: "e966b341-3de5-4e67-9f8b-231e82f2bd6b"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:33:57.208744 kubelet[1900]: I0513 00:33:57.208668 1900 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e966b341-3de5-4e67-9f8b-231e82f2bd6b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e966b341-3de5-4e67-9f8b-231e82f2bd6b" (UID: "e966b341-3de5-4e67-9f8b-231e82f2bd6b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:33:57.208744 kubelet[1900]: I0513 00:33:57.208688 1900 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e966b341-3de5-4e67-9f8b-231e82f2bd6b-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "e966b341-3de5-4e67-9f8b-231e82f2bd6b" (UID: "e966b341-3de5-4e67-9f8b-231e82f2bd6b"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:33:57.208744 kubelet[1900]: I0513 00:33:57.208707 1900 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e966b341-3de5-4e67-9f8b-231e82f2bd6b-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "e966b341-3de5-4e67-9f8b-231e82f2bd6b" (UID: "e966b341-3de5-4e67-9f8b-231e82f2bd6b"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:33:57.208744 kubelet[1900]: I0513 00:33:57.208727 1900 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e966b341-3de5-4e67-9f8b-231e82f2bd6b-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "e966b341-3de5-4e67-9f8b-231e82f2bd6b" (UID: "e966b341-3de5-4e67-9f8b-231e82f2bd6b"). InnerVolumeSpecName "cni-log-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:33:57.208860 kubelet[1900]: I0513 00:33:57.208746 1900 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e966b341-3de5-4e67-9f8b-231e82f2bd6b-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "e966b341-3de5-4e67-9f8b-231e82f2bd6b" (UID: "e966b341-3de5-4e67-9f8b-231e82f2bd6b"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:33:57.209290 kubelet[1900]: I0513 00:33:57.208985 1900 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e966b341-3de5-4e67-9f8b-231e82f2bd6b-policysync" (OuterVolumeSpecName: "policysync") pod "e966b341-3de5-4e67-9f8b-231e82f2bd6b" (UID: "e966b341-3de5-4e67-9f8b-231e82f2bd6b"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:33:57.209290 kubelet[1900]: I0513 00:33:57.208996 1900 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e966b341-3de5-4e67-9f8b-231e82f2bd6b-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "e966b341-3de5-4e67-9f8b-231e82f2bd6b" (UID: "e966b341-3de5-4e67-9f8b-231e82f2bd6b"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:33:57.209397 kubelet[1900]: I0513 00:33:57.209306 1900 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e966b341-3de5-4e67-9f8b-231e82f2bd6b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e966b341-3de5-4e67-9f8b-231e82f2bd6b" (UID: "e966b341-3de5-4e67-9f8b-231e82f2bd6b"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 13 00:33:57.213568 kubelet[1900]: I0513 00:33:57.213525 1900 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e966b341-3de5-4e67-9f8b-231e82f2bd6b-kube-api-access-zghsh" (OuterVolumeSpecName: "kube-api-access-zghsh") pod "e966b341-3de5-4e67-9f8b-231e82f2bd6b" (UID: "e966b341-3de5-4e67-9f8b-231e82f2bd6b"). InnerVolumeSpecName "kube-api-access-zghsh". PluginName "kubernetes.io/projected", VolumeGidValue "" May 13 00:33:57.213746 kubelet[1900]: I0513 00:33:57.213706 1900 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e966b341-3de5-4e67-9f8b-231e82f2bd6b-node-certs" (OuterVolumeSpecName: "node-certs") pod "e966b341-3de5-4e67-9f8b-231e82f2bd6b" (UID: "e966b341-3de5-4e67-9f8b-231e82f2bd6b"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" May 13 00:33:57.223510 kubelet[1900]: I0513 00:33:57.223468 1900 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e966b341-3de5-4e67-9f8b-231e82f2bd6b-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "e966b341-3de5-4e67-9f8b-231e82f2bd6b" (UID: "e966b341-3de5-4e67-9f8b-231e82f2bd6b"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 13 00:33:57.224277 systemd-networkd[1237]: cali5ec59c6bf6e: Link UP May 13 00:33:57.225095 systemd-networkd[1237]: cali5ec59c6bf6e: Gained carrier May 13 00:33:57.234209 containerd[1556]: 2025-05-13 00:33:57.153 [INFO][3636] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.113-k8s-test--pod--1-eth0 default e0253009-e6e5-42a6-850a-32397f118a78 1227 0 2025-05-13 00:33:43 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.113 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="a09aaa4b17d77f04d789638da3612903173a0651f3d0c4a9f5a9accea6fed642" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.113-k8s-test--pod--1-" May 13 00:33:57.234209 containerd[1556]: 2025-05-13 00:33:57.153 [INFO][3636] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a09aaa4b17d77f04d789638da3612903173a0651f3d0c4a9f5a9accea6fed642" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.113-k8s-test--pod--1-eth0" May 13 00:33:57.234209 containerd[1556]: 2025-05-13 00:33:57.176 [INFO][3659] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a09aaa4b17d77f04d789638da3612903173a0651f3d0c4a9f5a9accea6fed642" HandleID="k8s-pod-network.a09aaa4b17d77f04d789638da3612903173a0651f3d0c4a9f5a9accea6fed642" Workload="10.0.0.113-k8s-test--pod--1-eth0" May 13 00:33:57.234209 containerd[1556]: 2025-05-13 00:33:57.187 [INFO][3659] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a09aaa4b17d77f04d789638da3612903173a0651f3d0c4a9f5a9accea6fed642" HandleID="k8s-pod-network.a09aaa4b17d77f04d789638da3612903173a0651f3d0c4a9f5a9accea6fed642" Workload="10.0.0.113-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400027b050), 
Attrs:map[string]string{"namespace":"default", "node":"10.0.0.113", "pod":"test-pod-1", "timestamp":"2025-05-13 00:33:57.176651923 +0000 UTC"}, Hostname:"10.0.0.113", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 13 00:33:57.234209 containerd[1556]: 2025-05-13 00:33:57.187 [INFO][3659] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 13 00:33:57.234209 containerd[1556]: 2025-05-13 00:33:57.187 [INFO][3659] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 13 00:33:57.234209 containerd[1556]: 2025-05-13 00:33:57.187 [INFO][3659] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.113' May 13 00:33:57.234209 containerd[1556]: 2025-05-13 00:33:57.189 [INFO][3659] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a09aaa4b17d77f04d789638da3612903173a0651f3d0c4a9f5a9accea6fed642" host="10.0.0.113" May 13 00:33:57.234209 containerd[1556]: 2025-05-13 00:33:57.193 [INFO][3659] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.113" May 13 00:33:57.234209 containerd[1556]: 2025-05-13 00:33:57.198 [INFO][3659] ipam/ipam.go 489: Trying affinity for 192.168.6.192/26 host="10.0.0.113" May 13 00:33:57.234209 containerd[1556]: 2025-05-13 00:33:57.200 [INFO][3659] ipam/ipam.go 155: Attempting to load block cidr=192.168.6.192/26 host="10.0.0.113" May 13 00:33:57.234209 containerd[1556]: 2025-05-13 00:33:57.202 [INFO][3659] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.6.192/26 host="10.0.0.113" May 13 00:33:57.234209 containerd[1556]: 2025-05-13 00:33:57.202 [INFO][3659] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.6.192/26 handle="k8s-pod-network.a09aaa4b17d77f04d789638da3612903173a0651f3d0c4a9f5a9accea6fed642" host="10.0.0.113" May 13 00:33:57.234209 containerd[1556]: 
2025-05-13 00:33:57.204 [INFO][3659] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a09aaa4b17d77f04d789638da3612903173a0651f3d0c4a9f5a9accea6fed642 May 13 00:33:57.234209 containerd[1556]: 2025-05-13 00:33:57.207 [INFO][3659] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.6.192/26 handle="k8s-pod-network.a09aaa4b17d77f04d789638da3612903173a0651f3d0c4a9f5a9accea6fed642" host="10.0.0.113" May 13 00:33:57.234209 containerd[1556]: 2025-05-13 00:33:57.215 [INFO][3659] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.6.197/26] block=192.168.6.192/26 handle="k8s-pod-network.a09aaa4b17d77f04d789638da3612903173a0651f3d0c4a9f5a9accea6fed642" host="10.0.0.113" May 13 00:33:57.234209 containerd[1556]: 2025-05-13 00:33:57.215 [INFO][3659] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.6.197/26] handle="k8s-pod-network.a09aaa4b17d77f04d789638da3612903173a0651f3d0c4a9f5a9accea6fed642" host="10.0.0.113" May 13 00:33:57.234209 containerd[1556]: 2025-05-13 00:33:57.215 [INFO][3659] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
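The IPAM trace above shows Calico confirming affinity for block 192.168.6.192/26 and then claiming 192.168.6.197/26 from it. A minimal sketch of the address arithmetic implied by those log lines, using Python's stdlib `ipaddress` module (this is illustrative only, not Calico's IPAM code):

```python
import ipaddress

# Block and claimed address exactly as reported in the IPAM trace above.
block = ipaddress.ip_network("192.168.6.192/26")
claimed = ipaddress.ip_address("192.168.6.197")

# A /26 block spans 64 addresses; the claimed IP must fall inside it
# for the "Successfully claimed IPs" line to be consistent.
assert claimed in block
print(block.num_addresses)    # 64
print(block.network_address)  # 192.168.6.192
```

This is why the workload endpoint later carries `IPNetworks:[]string{"192.168.6.197/32"}`: the single address is claimed from the /26 block but attached to the pod interface as a /32.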
May 13 00:33:57.234209 containerd[1556]: 2025-05-13 00:33:57.215 [INFO][3659] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.6.197/26] IPv6=[] ContainerID="a09aaa4b17d77f04d789638da3612903173a0651f3d0c4a9f5a9accea6fed642" HandleID="k8s-pod-network.a09aaa4b17d77f04d789638da3612903173a0651f3d0c4a9f5a9accea6fed642" Workload="10.0.0.113-k8s-test--pod--1-eth0" May 13 00:33:57.234209 containerd[1556]: 2025-05-13 00:33:57.220 [INFO][3636] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a09aaa4b17d77f04d789638da3612903173a0651f3d0c4a9f5a9accea6fed642" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.113-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"e0253009-e6e5-42a6-850a-32397f118a78", ResourceVersion:"1227", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 33, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.113", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.6.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:33:57.234805 containerd[1556]: 2025-05-13 00:33:57.220 [INFO][3636] cni-plugin/k8s.go 387: Calico CNI using IPs: 
[192.168.6.197/32] ContainerID="a09aaa4b17d77f04d789638da3612903173a0651f3d0c4a9f5a9accea6fed642" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.113-k8s-test--pod--1-eth0" May 13 00:33:57.234805 containerd[1556]: 2025-05-13 00:33:57.220 [INFO][3636] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="a09aaa4b17d77f04d789638da3612903173a0651f3d0c4a9f5a9accea6fed642" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.113-k8s-test--pod--1-eth0" May 13 00:33:57.234805 containerd[1556]: 2025-05-13 00:33:57.225 [INFO][3636] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a09aaa4b17d77f04d789638da3612903173a0651f3d0c4a9f5a9accea6fed642" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.113-k8s-test--pod--1-eth0" May 13 00:33:57.234805 containerd[1556]: 2025-05-13 00:33:57.225 [INFO][3636] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a09aaa4b17d77f04d789638da3612903173a0651f3d0c4a9f5a9accea6fed642" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.113-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.113-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"e0253009-e6e5-42a6-850a-32397f118a78", ResourceVersion:"1227", Generation:0, CreationTimestamp:time.Date(2025, time.May, 13, 0, 33, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.113", 
ContainerID:"a09aaa4b17d77f04d789638da3612903173a0651f3d0c4a9f5a9accea6fed642", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.6.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"f6:59:fa:91:f6:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} May 13 00:33:57.234805 containerd[1556]: 2025-05-13 00:33:57.232 [INFO][3636] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a09aaa4b17d77f04d789638da3612903173a0651f3d0c4a9f5a9accea6fed642" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.113-k8s-test--pod--1-eth0" May 13 00:33:57.250957 containerd[1556]: time="2025-05-13T00:33:57.250877645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 13 00:33:57.250957 containerd[1556]: time="2025-05-13T00:33:57.250944485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 13 00:33:57.250957 containerd[1556]: time="2025-05-13T00:33:57.250956325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:33:57.251126 containerd[1556]: time="2025-05-13T00:33:57.251041805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 13 00:33:57.270808 systemd-resolved[1445]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 13 00:33:57.294359 containerd[1556]: time="2025-05-13T00:33:57.294324123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:e0253009-e6e5-42a6-850a-32397f118a78,Namespace:default,Attempt:0,} returns sandbox id \"a09aaa4b17d77f04d789638da3612903173a0651f3d0c4a9f5a9accea6fed642\"" May 13 00:33:57.309732 kubelet[1900]: I0513 00:33:57.309596 1900 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e966b341-3de5-4e67-9f8b-231e82f2bd6b-tigera-ca-bundle\") on node \"10.0.0.113\" DevicePath \"\"" May 13 00:33:57.309732 kubelet[1900]: I0513 00:33:57.309696 1900 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e966b341-3de5-4e67-9f8b-231e82f2bd6b-cni-net-dir\") on node \"10.0.0.113\" DevicePath \"\"" May 13 00:33:57.309732 kubelet[1900]: I0513 00:33:57.309730 1900 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e966b341-3de5-4e67-9f8b-231e82f2bd6b-policysync\") on node \"10.0.0.113\" DevicePath \"\"" May 13 00:33:57.309961 kubelet[1900]: I0513 00:33:57.309755 1900 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-zghsh\" (UniqueName: \"kubernetes.io/projected/e966b341-3de5-4e67-9f8b-231e82f2bd6b-kube-api-access-zghsh\") on node \"10.0.0.113\" DevicePath \"\"" May 13 00:33:57.309961 kubelet[1900]: I0513 00:33:57.309768 1900 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e966b341-3de5-4e67-9f8b-231e82f2bd6b-node-certs\") on node \"10.0.0.113\" DevicePath \"\"" May 13 00:33:57.309961 kubelet[1900]: I0513 00:33:57.309777 1900 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/e966b341-3de5-4e67-9f8b-231e82f2bd6b-xtables-lock\") on node \"10.0.0.113\" DevicePath \"\"" May 13 00:33:57.309961 kubelet[1900]: I0513 00:33:57.309786 1900 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e966b341-3de5-4e67-9f8b-231e82f2bd6b-cni-bin-dir\") on node \"10.0.0.113\" DevicePath \"\"" May 13 00:33:57.309961 kubelet[1900]: I0513 00:33:57.309794 1900 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e966b341-3de5-4e67-9f8b-231e82f2bd6b-lib-modules\") on node \"10.0.0.113\" DevicePath \"\"" May 13 00:33:57.309961 kubelet[1900]: I0513 00:33:57.309803 1900 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e966b341-3de5-4e67-9f8b-231e82f2bd6b-var-run-calico\") on node \"10.0.0.113\" DevicePath \"\"" May 13 00:33:57.309961 kubelet[1900]: I0513 00:33:57.309812 1900 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e966b341-3de5-4e67-9f8b-231e82f2bd6b-flexvol-driver-host\") on node \"10.0.0.113\" DevicePath \"\"" May 13 00:33:57.309961 kubelet[1900]: I0513 00:33:57.309821 1900 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e966b341-3de5-4e67-9f8b-231e82f2bd6b-cni-log-dir\") on node \"10.0.0.113\" DevicePath \"\"" May 13 00:33:57.310136 kubelet[1900]: I0513 00:33:57.309829 1900 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e966b341-3de5-4e67-9f8b-231e82f2bd6b-var-lib-calico\") on node \"10.0.0.113\" DevicePath \"\"" May 13 00:33:57.364232 kubelet[1900]: E0513 00:33:57.363976 1900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 13 
00:33:57.364501 containerd[1556]: time="2025-05-13T00:33:57.364415793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ksr2p,Uid:397f22b8-3a3d-4074-a31e-ee4585410ef7,Namespace:calico-system,Attempt:0,}"
May 13 00:33:57.380995 containerd[1556]: time="2025-05-13T00:33:57.380900318Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 13 00:33:57.380995 containerd[1556]: time="2025-05-13T00:33:57.380955878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 13 00:33:57.380995 containerd[1556]: time="2025-05-13T00:33:57.380967038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:33:57.381251 containerd[1556]: time="2025-05-13T00:33:57.381056759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 13 00:33:57.390785 kubelet[1900]: E0513 00:33:57.390739 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:33:57.411289 containerd[1556]: time="2025-05-13T00:33:57.411253961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ksr2p,Uid:397f22b8-3a3d-4074-a31e-ee4585410ef7,Namespace:calico-system,Attempt:0,} returns sandbox id \"32e4fe974c9fc8b7902585ab04d67f8e720ce01fa6688d65dd8b5c170aa44403\""
May 13 00:33:57.412629 kubelet[1900]: E0513 00:33:57.412226 1900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:33:57.414589 containerd[1556]: time="2025-05-13T00:33:57.414555730Z" level=info msg="CreateContainer within sandbox \"32e4fe974c9fc8b7902585ab04d67f8e720ce01fa6688d65dd8b5c170aa44403\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
May 13 00:33:57.426976 containerd[1556]: time="2025-05-13T00:33:57.426938923Z" level=info msg="CreateContainer within sandbox \"32e4fe974c9fc8b7902585ab04d67f8e720ce01fa6688d65dd8b5c170aa44403\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"cc98e544f7b224f0e6d771d80378f4606a3ddd5fa4cdae6e17566d030a6458f2\""
May 13 00:33:57.428650 containerd[1556]: time="2025-05-13T00:33:57.427684645Z" level=info msg="StartContainer for \"cc98e544f7b224f0e6d771d80378f4606a3ddd5fa4cdae6e17566d030a6458f2\""
May 13 00:33:57.480075 containerd[1556]: time="2025-05-13T00:33:57.479973747Z" level=info msg="StartContainer for \"cc98e544f7b224f0e6d771d80378f4606a3ddd5fa4cdae6e17566d030a6458f2\" returns successfully"
May 13 00:33:57.548505 containerd[1556]: time="2025-05-13T00:33:57.548378213Z" level=info msg="shim disconnected" id=cc98e544f7b224f0e6d771d80378f4606a3ddd5fa4cdae6e17566d030a6458f2 namespace=k8s.io
May 13 00:33:57.548505 containerd[1556]: time="2025-05-13T00:33:57.548434853Z" level=warning msg="cleaning up after shim disconnected" id=cc98e544f7b224f0e6d771d80378f4606a3ddd5fa4cdae6e17566d030a6458f2 namespace=k8s.io
May 13 00:33:57.548505 containerd[1556]: time="2025-05-13T00:33:57.548444493Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 00:33:57.658102 kubelet[1900]: E0513 00:33:57.658053 1900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:33:57.661919 containerd[1556]: time="2025-05-13T00:33:57.661879962Z" level=info msg="CreateContainer within sandbox \"32e4fe974c9fc8b7902585ab04d67f8e720ce01fa6688d65dd8b5c170aa44403\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
May 13 00:33:57.668594 kubelet[1900]: I0513 00:33:57.668553 1900 scope.go:117] "RemoveContainer" containerID="8d0b437b26b1361cfb63c799bf17cda31549e1c3540186827167ca60087a8685"
May 13 00:33:57.669807 containerd[1556]: time="2025-05-13T00:33:57.669769063Z" level=info msg="RemoveContainer for \"8d0b437b26b1361cfb63c799bf17cda31549e1c3540186827167ca60087a8685\""
May 13 00:33:57.679996 containerd[1556]: time="2025-05-13T00:33:57.679958251Z" level=info msg="RemoveContainer for \"8d0b437b26b1361cfb63c799bf17cda31549e1c3540186827167ca60087a8685\" returns successfully"
May 13 00:33:57.680250 kubelet[1900]: I0513 00:33:57.680223 1900 scope.go:117] "RemoveContainer" containerID="2856960ec97ac231301582d12f1bc9f138c53fd6cb9e7c864eb151ce65337b03"
May 13 00:33:57.681465 containerd[1556]: time="2025-05-13T00:33:57.681348894Z" level=info msg="RemoveContainer for \"2856960ec97ac231301582d12f1bc9f138c53fd6cb9e7c864eb151ce65337b03\""
May 13 00:33:57.682497 containerd[1556]: time="2025-05-13T00:33:57.682448537Z" level=info msg="CreateContainer within sandbox \"32e4fe974c9fc8b7902585ab04d67f8e720ce01fa6688d65dd8b5c170aa44403\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7bd30a18725d4655fd3ca65039e3a79e25457724fbd8c748174734a314feea8d\""
May 13 00:33:57.682974 containerd[1556]: time="2025-05-13T00:33:57.682951299Z" level=info msg="StartContainer for \"7bd30a18725d4655fd3ca65039e3a79e25457724fbd8c748174734a314feea8d\""
May 13 00:33:57.689739 containerd[1556]: time="2025-05-13T00:33:57.689357916Z" level=info msg="RemoveContainer for \"2856960ec97ac231301582d12f1bc9f138c53fd6cb9e7c864eb151ce65337b03\" returns successfully"
May 13 00:33:57.689836 kubelet[1900]: I0513 00:33:57.689713 1900 scope.go:117] "RemoveContainer" containerID="0a50f8d77b494ee39a6a82e08f08fe9f0d5fff782227bf8ccf4a24a38d2c7c75"
May 13 00:33:57.690967 containerd[1556]: time="2025-05-13T00:33:57.690936321Z" level=info msg="RemoveContainer for \"0a50f8d77b494ee39a6a82e08f08fe9f0d5fff782227bf8ccf4a24a38d2c7c75\""
May 13 00:33:57.694283 containerd[1556]: time="2025-05-13T00:33:57.694243770Z" level=info msg="RemoveContainer for \"0a50f8d77b494ee39a6a82e08f08fe9f0d5fff782227bf8ccf4a24a38d2c7c75\" returns successfully"
May 13 00:33:57.694627 kubelet[1900]: I0513 00:33:57.694582 1900 scope.go:117] "RemoveContainer" containerID="8d0b437b26b1361cfb63c799bf17cda31549e1c3540186827167ca60087a8685"
May 13 00:33:57.695186 containerd[1556]: time="2025-05-13T00:33:57.695145652Z" level=error msg="ContainerStatus for \"8d0b437b26b1361cfb63c799bf17cda31549e1c3540186827167ca60087a8685\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8d0b437b26b1361cfb63c799bf17cda31549e1c3540186827167ca60087a8685\": not found"
May 13 00:33:57.697187 kubelet[1900]: E0513 00:33:57.697154 1900 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8d0b437b26b1361cfb63c799bf17cda31549e1c3540186827167ca60087a8685\": not found" containerID="8d0b437b26b1361cfb63c799bf17cda31549e1c3540186827167ca60087a8685"
May 13 00:33:57.697250 kubelet[1900]: I0513 00:33:57.697199 1900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8d0b437b26b1361cfb63c799bf17cda31549e1c3540186827167ca60087a8685"} err="failed to get container status \"8d0b437b26b1361cfb63c799bf17cda31549e1c3540186827167ca60087a8685\": rpc error: code = NotFound desc = an error occurred when try to find container \"8d0b437b26b1361cfb63c799bf17cda31549e1c3540186827167ca60087a8685\": not found"
May 13 00:33:57.697250 kubelet[1900]: I0513 00:33:57.697228 1900 scope.go:117] "RemoveContainer" containerID="2856960ec97ac231301582d12f1bc9f138c53fd6cb9e7c864eb151ce65337b03"
May 13 00:33:57.697491 containerd[1556]: time="2025-05-13T00:33:57.697457938Z" level=error msg="ContainerStatus for \"2856960ec97ac231301582d12f1bc9f138c53fd6cb9e7c864eb151ce65337b03\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2856960ec97ac231301582d12f1bc9f138c53fd6cb9e7c864eb151ce65337b03\": not found"
May 13 00:33:57.697687 kubelet[1900]: E0513 00:33:57.697590 1900 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2856960ec97ac231301582d12f1bc9f138c53fd6cb9e7c864eb151ce65337b03\": not found" containerID="2856960ec97ac231301582d12f1bc9f138c53fd6cb9e7c864eb151ce65337b03"
May 13 00:33:57.697731 kubelet[1900]: I0513 00:33:57.697691 1900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2856960ec97ac231301582d12f1bc9f138c53fd6cb9e7c864eb151ce65337b03"} err="failed to get container status \"2856960ec97ac231301582d12f1bc9f138c53fd6cb9e7c864eb151ce65337b03\": rpc error: code = NotFound desc = an error occurred when try to find container \"2856960ec97ac231301582d12f1bc9f138c53fd6cb9e7c864eb151ce65337b03\": not found"
May 13 00:33:57.697731 kubelet[1900]: I0513 00:33:57.697708 1900 scope.go:117] "RemoveContainer" containerID="0a50f8d77b494ee39a6a82e08f08fe9f0d5fff782227bf8ccf4a24a38d2c7c75"
May 13 00:33:57.698000 containerd[1556]: time="2025-05-13T00:33:57.697972820Z" level=error msg="ContainerStatus for \"0a50f8d77b494ee39a6a82e08f08fe9f0d5fff782227bf8ccf4a24a38d2c7c75\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0a50f8d77b494ee39a6a82e08f08fe9f0d5fff782227bf8ccf4a24a38d2c7c75\": not found"
May 13 00:33:57.698154 kubelet[1900]: E0513 00:33:57.698133 1900 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0a50f8d77b494ee39a6a82e08f08fe9f0d5fff782227bf8ccf4a24a38d2c7c75\": not found" containerID="0a50f8d77b494ee39a6a82e08f08fe9f0d5fff782227bf8ccf4a24a38d2c7c75"
May 13 00:33:57.698206 kubelet[1900]: I0513 00:33:57.698167 1900 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0a50f8d77b494ee39a6a82e08f08fe9f0d5fff782227bf8ccf4a24a38d2c7c75"} err="failed to get container status \"0a50f8d77b494ee39a6a82e08f08fe9f0d5fff782227bf8ccf4a24a38d2c7c75\": rpc error: code = NotFound desc = an error occurred when try to find container \"0a50f8d77b494ee39a6a82e08f08fe9f0d5fff782227bf8ccf4a24a38d2c7c75\": not found"
May 13 00:33:57.730917 systemd[1]: var-lib-kubelet-pods-e966b341\x2d3de5\x2d4e67\x2d9f8b\x2d231e82f2bd6b-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully.
May 13 00:33:57.731071 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-619de8f5eaacb21d068209b6bcb34be94dcf93586d926a617d46794a40f36ae8-rootfs.mount: Deactivated successfully.
May 13 00:33:57.731150 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-619de8f5eaacb21d068209b6bcb34be94dcf93586d926a617d46794a40f36ae8-shm.mount: Deactivated successfully.
May 13 00:33:57.731230 systemd[1]: var-lib-kubelet-pods-e966b341\x2d3de5\x2d4e67\x2d9f8b\x2d231e82f2bd6b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzghsh.mount: Deactivated successfully.
May 13 00:33:57.731316 systemd[1]: var-lib-kubelet-pods-e966b341\x2d3de5\x2d4e67\x2d9f8b\x2d231e82f2bd6b-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully.
May 13 00:33:57.749861 containerd[1556]: time="2025-05-13T00:33:57.749728240Z" level=info msg="StartContainer for \"7bd30a18725d4655fd3ca65039e3a79e25457724fbd8c748174734a314feea8d\" returns successfully"
May 13 00:33:58.201780 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7bd30a18725d4655fd3ca65039e3a79e25457724fbd8c748174734a314feea8d-rootfs.mount: Deactivated successfully.
May 13 00:33:58.213653 containerd[1556]: time="2025-05-13T00:33:58.213497664Z" level=info msg="shim disconnected" id=7bd30a18725d4655fd3ca65039e3a79e25457724fbd8c748174734a314feea8d namespace=k8s.io
May 13 00:33:58.213653 containerd[1556]: time="2025-05-13T00:33:58.213640064Z" level=warning msg="cleaning up after shim disconnected" id=7bd30a18725d4655fd3ca65039e3a79e25457724fbd8c748174734a314feea8d namespace=k8s.io
May 13 00:33:58.213653 containerd[1556]: time="2025-05-13T00:33:58.213651065Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 13 00:33:58.224560 containerd[1556]: time="2025-05-13T00:33:58.224505012Z" level=warning msg="cleanup warnings time=\"2025-05-13T00:33:58Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
May 13 00:33:58.360904 kubelet[1900]: E0513 00:33:58.360870 1900 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:33:58.391580 kubelet[1900]: E0513 00:33:58.391522 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:33:58.427674 containerd[1556]: time="2025-05-13T00:33:58.427615769Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:33:58.428092 containerd[1556]: time="2025-05-13T00:33:58.428048851Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.3: active requests=0, bytes read=28370571"
May 13 00:33:58.428910 containerd[1556]: time="2025-05-13T00:33:58.428879253Z" level=info msg="ImageCreate event name:\"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:33:58.430841 containerd[1556]: time="2025-05-13T00:33:58.430807178Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:33:58.432146 containerd[1556]: time="2025-05-13T00:33:58.432106461Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.3\" with image id \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f5516aa6a78f00931d2625f3012dcf2c69d141ce41483b8d59c6ec6330a18620\", size \"29739745\" in 1.284101335s"
May 13 00:33:58.432186 containerd[1556]: time="2025-05-13T00:33:58.432146021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.3\" returns image reference \"sha256:26e730979a07ea7452715da6ac48076016018bc982c06ebd32d5e095f42d3d54\""
May 13 00:33:58.433984 containerd[1556]: time="2025-05-13T00:33:58.433819345Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
May 13 00:33:58.440630 containerd[1556]: time="2025-05-13T00:33:58.440138761Z" level=info msg="CreateContainer within sandbox \"5632f6398384396fe2e45f2c6c75a7551bd81b23817b973462cbf046d9e1cb41\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
May 13 00:33:58.448956 containerd[1556]: time="2025-05-13T00:33:58.448906264Z" level=info msg="CreateContainer within sandbox \"5632f6398384396fe2e45f2c6c75a7551bd81b23817b973462cbf046d9e1cb41\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"00d5bd38ebacb945bfc1215cef40253e8c7c064bfd69d2ad2beb53503cbfbbea\""
May 13 00:33:58.450482 containerd[1556]: time="2025-05-13T00:33:58.449424945Z" level=info msg="StartContainer for \"00d5bd38ebacb945bfc1215cef40253e8c7c064bfd69d2ad2beb53503cbfbbea\""
May 13 00:33:58.505312 containerd[1556]: time="2025-05-13T00:33:58.504034044Z" level=info msg="StartContainer for \"00d5bd38ebacb945bfc1215cef40253e8c7c064bfd69d2ad2beb53503cbfbbea\" returns successfully"
May 13 00:33:58.520189 kubelet[1900]: I0513 00:33:58.520158 1900 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e966b341-3de5-4e67-9f8b-231e82f2bd6b" path="/var/lib/kubelet/pods/e966b341-3de5-4e67-9f8b-231e82f2bd6b/volumes"
May 13 00:33:58.671690 systemd-networkd[1237]: cali5ec59c6bf6e: Gained IPv6LL
May 13 00:33:58.676291 kubelet[1900]: E0513 00:33:58.675759 1900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:33:58.679835 kubelet[1900]: E0513 00:33:58.679801 1900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:33:58.687115 containerd[1556]: time="2025-05-13T00:33:58.686923670Z" level=info msg="CreateContainer within sandbox \"32e4fe974c9fc8b7902585ab04d67f8e720ce01fa6688d65dd8b5c170aa44403\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
May 13 00:33:58.698987 kubelet[1900]: I0513 00:33:58.698935 1900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6b4fb85cbb-kqtwc" podStartSLOduration=1.413858042 podStartE2EDuration="2.6989169s" podCreationTimestamp="2025-05-13 00:33:56 +0000 UTC" firstStartedPulling="2025-05-13 00:33:57.147762045 +0000 UTC m=+39.806945004" lastFinishedPulling="2025-05-13 00:33:58.432820903 +0000 UTC m=+41.092003862" observedRunningTime="2025-05-13 00:33:58.68302042 +0000 UTC m=+41.342203379" watchObservedRunningTime="2025-05-13 00:33:58.6989169 +0000 UTC m=+41.358099819"
May 13 00:33:58.749381 containerd[1556]: time="2025-05-13T00:33:58.749332749Z" level=info msg="CreateContainer within sandbox \"32e4fe974c9fc8b7902585ab04d67f8e720ce01fa6688d65dd8b5c170aa44403\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"699b0b4e9f8e5b28edc24e689427580c32ee28c089a99d4604ca2d61e0eb517d\""
May 13 00:33:58.749900 containerd[1556]: time="2025-05-13T00:33:58.749754470Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 13 00:33:58.750705 containerd[1556]: time="2025-05-13T00:33:58.750106511Z" level=info msg="StartContainer for \"699b0b4e9f8e5b28edc24e689427580c32ee28c089a99d4604ca2d61e0eb517d\""
May 13 00:33:58.751294 containerd[1556]: time="2025-05-13T00:33:58.751270154Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
May 13 00:33:58.754403 containerd[1556]: time="2025-05-13T00:33:58.754354322Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\", size \"69948737\" in 320.492337ms"
May 13 00:33:58.754506 containerd[1556]: time="2025-05-13T00:33:58.754489002Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\""
May 13 00:33:58.758571 containerd[1556]: time="2025-05-13T00:33:58.758132611Z" level=info msg="CreateContainer within sandbox \"a09aaa4b17d77f04d789638da3612903173a0651f3d0c4a9f5a9accea6fed642\" for container &ContainerMetadata{Name:test,Attempt:0,}"
May 13 00:33:58.769884 containerd[1556]: time="2025-05-13T00:33:58.769842681Z" level=info msg="CreateContainer within sandbox \"a09aaa4b17d77f04d789638da3612903173a0651f3d0c4a9f5a9accea6fed642\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"b17696e407d0d445d9a209ce62327b8f98d80d7bb45a3171b85824e71d85ec73\""
May 13 00:33:58.770381 containerd[1556]: time="2025-05-13T00:33:58.770349962Z" level=info msg="StartContainer for \"b17696e407d0d445d9a209ce62327b8f98d80d7bb45a3171b85824e71d85ec73\""
May 13 00:33:58.808619 containerd[1556]: time="2025-05-13T00:33:58.808570300Z" level=info msg="StartContainer for \"699b0b4e9f8e5b28edc24e689427580c32ee28c089a99d4604ca2d61e0eb517d\" returns successfully"
May 13 00:33:58.832578 containerd[1556]: time="2025-05-13T00:33:58.832466361Z" level=info msg="StartContainer for \"b17696e407d0d445d9a209ce62327b8f98d80d7bb45a3171b85824e71d85ec73\" returns successfully"
May 13 00:33:59.392504 kubelet[1900]: E0513 00:33:59.392474 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:33:59.682166 kubelet[1900]: E0513 00:33:59.681175 1900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:33:59.682166 kubelet[1900]: E0513 00:33:59.682033 1900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:33:59.688086 kubelet[1900]: I0513 00:33:59.688026 1900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=15.228455033 podStartE2EDuration="16.68801047s" podCreationTimestamp="2025-05-13 00:33:43 +0000 UTC" firstStartedPulling="2025-05-13 00:33:57.295502326 +0000 UTC m=+39.954685285" lastFinishedPulling="2025-05-13 00:33:58.755057763 +0000 UTC m=+41.414240722" observedRunningTime="2025-05-13 00:33:59.68772147 +0000 UTC m=+42.346904469" watchObservedRunningTime="2025-05-13 00:33:59.68801047 +0000 UTC m=+42.347193429"
May 13 00:33:59.716135 kubelet[1900]: I0513 00:33:59.715905 1900 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ksr2p" podStartSLOduration=2.715887377 podStartE2EDuration="2.715887377s" podCreationTimestamp="2025-05-13 00:33:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-13 00:33:59.713975572 +0000 UTC m=+42.373158531" watchObservedRunningTime="2025-05-13 00:33:59.715887377 +0000 UTC m=+42.375070296"
May 13 00:34:00.393005 kubelet[1900]: E0513 00:34:00.392965 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:34:00.684162 kubelet[1900]: E0513 00:34:00.682949 1900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:34:00.684162 kubelet[1900]: E0513 00:34:00.683056 1900 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 13 00:34:01.393319 kubelet[1900]: E0513 00:34:01.393270 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:34:02.394340 kubelet[1900]: E0513 00:34:02.394262 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:34:03.395293 kubelet[1900]: E0513 00:34:03.395239 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:34:04.396372 kubelet[1900]: E0513 00:34:04.396324 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 13 00:34:05.396714 kubelet[1900]: E0513 00:34:05.396652 1900 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"