Dec 13 01:27:37.352345 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Dec 13 01:27:37.352371 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Dec 12 23:24:21 -00 2024 Dec 13 01:27:37.352379 kernel: KASLR enabled Dec 13 01:27:37.352385 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Dec 13 01:27:37.352393 kernel: printk: bootconsole [pl11] enabled Dec 13 01:27:37.352398 kernel: efi: EFI v2.7 by EDK II Dec 13 01:27:37.352405 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3eae7718 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Dec 13 01:27:37.352412 kernel: random: crng init done Dec 13 01:27:37.352418 kernel: ACPI: Early table checksum verification disabled Dec 13 01:27:37.352423 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Dec 13 01:27:37.352430 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:27:37.352436 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:27:37.352443 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Dec 13 01:27:37.352449 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:27:37.352456 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:27:37.352463 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:27:37.352469 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:27:37.352477 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:27:37.352484 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:27:37.352490 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Dec 13 01:27:37.352496 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:27:37.352503 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Dec 13 01:27:37.352509 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Dec 13 01:27:37.352515 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Dec 13 01:27:37.352521 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Dec 13 01:27:37.352528 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Dec 13 01:27:37.352534 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Dec 13 01:27:37.352540 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Dec 13 01:27:37.352548 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Dec 13 01:27:37.352554 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Dec 13 01:27:37.352561 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Dec 13 01:27:37.352567 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Dec 13 01:27:37.352573 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Dec 13 01:27:37.352579 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Dec 13 01:27:37.352586 kernel: NUMA: NODE_DATA [mem 0x1bf7f0800-0x1bf7f5fff] Dec 13 01:27:37.352592 kernel: Zone ranges: Dec 13 01:27:37.352598 kernel: DMA [mem 
0x0000000000000000-0x00000000ffffffff] Dec 13 01:27:37.352604 kernel: DMA32 empty Dec 13 01:27:37.352610 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Dec 13 01:27:37.352617 kernel: Movable zone start for each node Dec 13 01:27:37.352628 kernel: Early memory node ranges Dec 13 01:27:37.352635 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Dec 13 01:27:37.352641 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Dec 13 01:27:37.352648 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Dec 13 01:27:37.352655 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Dec 13 01:27:37.352663 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Dec 13 01:27:37.352669 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Dec 13 01:27:37.352676 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Dec 13 01:27:37.352683 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Dec 13 01:27:37.352690 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Dec 13 01:27:37.352696 kernel: psci: probing for conduit method from ACPI. Dec 13 01:27:37.352703 kernel: psci: PSCIv1.1 detected in firmware. Dec 13 01:27:37.352710 kernel: psci: Using standard PSCI v0.2 function IDs Dec 13 01:27:37.352716 kernel: psci: MIGRATE_INFO_TYPE not supported. Dec 13 01:27:37.352723 kernel: psci: SMC Calling Convention v1.4 Dec 13 01:27:37.352730 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Dec 13 01:27:37.352736 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Dec 13 01:27:37.352745 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Dec 13 01:27:37.352751 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Dec 13 01:27:37.352758 kernel: pcpu-alloc: [0] 0 [0] 1 Dec 13 01:27:37.352764 kernel: Detected PIPT I-cache on CPU0 Dec 13 01:27:37.352771 kernel: CPU features: detected: GIC system register CPU interface Dec 13 01:27:37.352778 kernel: CPU features: detected: Hardware dirty bit management Dec 13 01:27:37.352784 kernel: CPU features: detected: Spectre-BHB Dec 13 01:27:37.352791 kernel: CPU features: kernel page table isolation forced ON by KASLR Dec 13 01:27:37.352798 kernel: CPU features: detected: Kernel page table isolation (KPTI) Dec 13 01:27:37.352804 kernel: CPU features: detected: ARM erratum 1418040 Dec 13 01:27:37.352811 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Dec 13 01:27:37.352819 kernel: CPU features: detected: SSBS not fully self-synchronizing Dec 13 01:27:37.352826 kernel: alternatives: applying boot alternatives Dec 13 01:27:37.352834 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6 Dec 13 01:27:37.352842 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 01:27:37.352848 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 01:27:37.352855 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 01:27:37.352862 kernel: Fallback order for Node 0: 0 Dec 13 01:27:37.352869 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 1032156 Dec 13 01:27:37.352876 kernel: Policy zone: Normal Dec 13 01:27:37.352882 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 01:27:37.352889 kernel: software IO TLB: area num 2. Dec 13 01:27:37.352897 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Dec 13 01:27:37.352904 kernel: Memory: 3982760K/4194160K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 211400K reserved, 0K cma-reserved) Dec 13 01:27:37.352911 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 01:27:37.352918 kernel: trace event string verifier disabled Dec 13 01:27:37.352924 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 01:27:37.352932 kernel: rcu: RCU event tracing is enabled. Dec 13 01:27:37.352938 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 01:27:37.352945 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 01:27:37.352952 kernel: Tracing variant of Tasks RCU enabled. Dec 13 01:27:37.352959 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 01:27:37.352966 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 01:27:37.352974 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 13 01:27:37.352980 kernel: GICv3: 960 SPIs implemented Dec 13 01:27:37.352987 kernel: GICv3: 0 Extended SPIs implemented Dec 13 01:27:37.352993 kernel: Root IRQ handler: gic_handle_irq Dec 13 01:27:37.353000 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Dec 13 01:27:37.353007 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Dec 13 01:27:37.353014 kernel: ITS: No ITS available, not enabling LPIs Dec 13 01:27:37.353021 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 01:27:37.353027 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 01:27:37.353034 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Dec 13 01:27:37.353041 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Dec 13 01:27:37.353048 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Dec 13 01:27:37.353057 kernel: Console: colour dummy device 80x25 Dec 13 01:27:37.353064 kernel: printk: console [tty1] enabled Dec 13 01:27:37.353071 kernel: ACPI: Core revision 20230628 Dec 13 01:27:37.353078 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Dec 13 01:27:37.353085 kernel: pid_max: default: 32768 minimum: 301 Dec 13 01:27:37.353092 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 01:27:37.353099 kernel: landlock: Up and running. Dec 13 01:27:37.353105 kernel: SELinux: Initializing. Dec 13 01:27:37.353112 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:27:37.353121 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:27:37.353128 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:27:37.353135 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Dec 13 01:27:37.353142 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Dec 13 01:27:37.353149 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0 Dec 13 01:27:37.353156 kernel: Hyper-V: enabling crash_kexec_post_notifiers Dec 13 01:27:37.353163 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:27:37.353177 kernel: rcu: Max phase no-delay instances is 400. Dec 13 01:27:37.353184 kernel: Remapping and enabling EFI services. Dec 13 01:27:37.353191 kernel: smp: Bringing up secondary CPUs ... Dec 13 01:27:37.353198 kernel: Detected PIPT I-cache on CPU1 Dec 13 01:27:37.353207 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Dec 13 01:27:37.353215 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 01:27:37.353222 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Dec 13 01:27:37.353229 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 01:27:37.353236 kernel: SMP: Total of 2 processors activated. Dec 13 01:27:37.353244 kernel: CPU features: detected: 32-bit EL0 Support Dec 13 01:27:37.353252 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Dec 13 01:27:37.353260 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Dec 13 01:27:37.353267 kernel: CPU features: detected: CRC32 instructions Dec 13 01:27:37.353275 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Dec 13 01:27:37.353282 kernel: CPU features: detected: LSE atomic instructions Dec 13 01:27:37.353290 kernel: CPU features: detected: Privileged Access Never Dec 13 01:27:37.353297 kernel: CPU: All CPU(s) started at EL1 Dec 13 01:27:37.353304 kernel: alternatives: applying system-wide alternatives Dec 13 01:27:37.353311 kernel: devtmpfs: initialized Dec 13 01:27:37.358457 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:27:37.358482 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 01:27:37.358490 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:27:37.358497 kernel: SMBIOS 3.1.0 present. Dec 13 01:27:37.358505 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Dec 13 01:27:37.358513 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:27:37.358521 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 13 01:27:37.358529 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 13 01:27:37.358542 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 13 01:27:37.358549 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:27:37.358557 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Dec 13 01:27:37.358564 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:27:37.358572 kernel: cpuidle: using governor menu Dec 13 01:27:37.358579 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Dec 13 01:27:37.358586 kernel: ASID allocator initialised with 32768 entries Dec 13 01:27:37.358594 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:27:37.358601 kernel: Serial: AMBA PL011 UART driver Dec 13 01:27:37.358611 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Dec 13 01:27:37.358618 kernel: Modules: 0 pages in range for non-PLT usage Dec 13 01:27:37.358625 kernel: Modules: 509040 pages in range for PLT usage Dec 13 01:27:37.358633 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:27:37.358640 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 01:27:37.358648 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 13 01:27:37.358655 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 13 01:27:37.358662 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:27:37.358670 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 01:27:37.358678 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 13 01:27:37.358686 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 13 01:27:37.358693 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:27:37.358700 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:27:37.358708 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:27:37.358715 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:27:37.358722 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 01:27:37.358730 kernel: ACPI: Interpreter enabled Dec 13 01:27:37.358737 kernel: ACPI: Using GIC for interrupt routing Dec 13 01:27:37.358745 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Dec 13 01:27:37.358754 kernel: printk: console [ttyAMA0] enabled Dec 13 01:27:37.358762 kernel: printk: bootconsole [pl11] disabled Dec 13 01:27:37.358769 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Dec 13 01:27:37.358776 kernel: iommu: Default domain type: Translated Dec 13 01:27:37.358784 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 13 01:27:37.358791 kernel: efivars: Registered efivars operations Dec 13 01:27:37.358798 kernel: vgaarb: loaded Dec 13 01:27:37.358805 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 13 01:27:37.358813 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 01:27:37.358822 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 01:27:37.358829 kernel: pnp: PnP ACPI init Dec 13 01:27:37.358836 kernel: pnp: PnP ACPI: found 0 devices Dec 13 01:27:37.358843 kernel: NET: Registered PF_INET protocol family Dec 13 01:27:37.358851 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 01:27:37.358858 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 01:27:37.358866 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 01:27:37.358873 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 01:27:37.358882 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 13 01:27:37.358890 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 01:27:37.358897 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:27:37.358905 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:27:37.358912 
kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 01:27:37.358920 kernel: PCI: CLS 0 bytes, default 64 Dec 13 01:27:37.358928 kernel: kvm [1]: HYP mode not available Dec 13 01:27:37.358935 kernel: Initialise system trusted keyrings Dec 13 01:27:37.358942 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 01:27:37.358951 kernel: Key type asymmetric registered Dec 13 01:27:37.358958 kernel: Asymmetric key parser 'x509' registered Dec 13 01:27:37.358966 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 13 01:27:37.358973 kernel: io scheduler mq-deadline registered Dec 13 01:27:37.358980 kernel: io scheduler kyber registered Dec 13 01:27:37.358988 kernel: io scheduler bfq registered Dec 13 01:27:37.358995 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:27:37.359002 kernel: thunder_xcv, ver 1.0 Dec 13 01:27:37.359010 kernel: thunder_bgx, ver 1.0 Dec 13 01:27:37.359017 kernel: nicpf, ver 1.0 Dec 13 01:27:37.359026 kernel: nicvf, ver 1.0 Dec 13 01:27:37.359205 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 13 01:27:37.359282 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T01:27:36 UTC (1734053256) Dec 13 01:27:37.359293 kernel: efifb: probing for efifb Dec 13 01:27:37.359300 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Dec 13 01:27:37.359308 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Dec 13 01:27:37.359315 kernel: efifb: scrolling: redraw Dec 13 01:27:37.359355 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 13 01:27:37.359364 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 01:27:37.359372 kernel: fb0: EFI VGA frame buffer device Dec 13 01:27:37.359379 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Dec 13 01:27:37.359387 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 01:27:37.359394 kernel: No ACPI PMU IRQ for CPU0 Dec 13 01:27:37.359401 kernel: No ACPI PMU IRQ for CPU1 Dec 13 01:27:37.359408 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Dec 13 01:27:37.359416 kernel: watchdog: Delayed init of the lockup detector failed: -19 Dec 13 01:27:37.359425 kernel: watchdog: Hard watchdog permanently disabled Dec 13 01:27:37.359433 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:27:37.359440 kernel: Segment Routing with IPv6 Dec 13 01:27:37.359447 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:27:37.359455 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:27:37.359462 kernel: Key type dns_resolver registered Dec 13 01:27:37.359469 kernel: registered taskstats version 1 Dec 13 01:27:37.359477 kernel: Loading compiled-in X.509 certificates Dec 13 01:27:37.359484 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: d83da9ddb9e3c2439731828371f21d0232fd9ffb' Dec 13 01:27:37.359492 kernel: Key type .fscrypt registered Dec 13 01:27:37.359510 kernel: Key type fscrypt-provisioning registered Dec 13 01:27:37.359518 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 01:27:37.359525 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:27:37.359532 kernel: ima: No architecture policies found Dec 13 01:27:37.359540 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 13 01:27:37.359547 kernel: clk: Disabling unused clocks Dec 13 01:27:37.359555 kernel: Freeing unused kernel memory: 39360K Dec 13 01:27:37.359562 kernel: Run /init as init process Dec 13 01:27:37.359571 kernel: with arguments: Dec 13 01:27:37.359578 kernel: /init Dec 13 01:27:37.359585 kernel: with environment: Dec 13 01:27:37.359592 kernel: HOME=/ Dec 13 01:27:37.359599 kernel: TERM=linux Dec 13 01:27:37.359607 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:27:37.359617 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:27:37.359626 systemd[1]: Detected virtualization microsoft. Dec 13 01:27:37.359636 systemd[1]: Detected architecture arm64. Dec 13 01:27:37.359644 systemd[1]: Running in initrd. Dec 13 01:27:37.359651 systemd[1]: No hostname configured, using default hostname. Dec 13 01:27:37.359659 systemd[1]: Hostname set to . Dec 13 01:27:37.359667 systemd[1]: Initializing machine ID from random generator. Dec 13 01:27:37.359675 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:27:37.359683 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:27:37.359698 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:27:37.359709 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 01:27:37.359717 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:27:37.359726 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 01:27:37.359734 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 01:27:37.359743 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 01:27:37.359751 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 01:27:37.359759 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:27:37.359769 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:27:37.359777 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:27:37.359785 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:27:37.359793 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:27:37.359801 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:27:37.359809 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:27:37.359817 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:27:37.359825 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:27:37.359834 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Dec 13 01:27:37.359843 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:27:37.359850 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:27:37.359858 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:27:37.359866 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:27:37.359874 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:27:37.359882 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:27:37.359890 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:27:37.359898 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:27:37.359908 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:27:37.359916 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:27:37.359947 systemd-journald[217]: Collecting audit messages is disabled. Dec 13 01:27:37.359967 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:27:37.359978 systemd-journald[217]: Journal started Dec 13 01:27:37.359998 systemd-journald[217]: Runtime Journal (/run/log/journal/e8765cee9ac1405aa1b11d1a43d90565) is 8.0M, max 78.5M, 70.5M free. Dec 13 01:27:37.360711 systemd-modules-load[218]: Inserted module 'overlay' Dec 13 01:27:37.380936 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:27:37.392341 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 01:27:37.392499 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:27:37.410898 kernel: Bridge firewalling registered Dec 13 01:27:37.404319 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:27:37.409977 systemd-modules-load[218]: Inserted module 'br_netfilter' Dec 13 01:27:37.418131 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:27:37.431352 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:27:37.440902 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:27:37.467674 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:27:37.483340 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:27:37.505975 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:27:37.525009 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:27:37.534753 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:27:37.551419 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:27:37.576112 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:27:37.583636 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:27:37.608598 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 01:27:37.617554 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:27:37.639535 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Dec 13 01:27:37.670121 dracut-cmdline[250]: dracut-dracut-053 Dec 13 01:27:37.670121 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6 Dec 13 01:27:37.674050 systemd-resolved[252]: Positive Trust Anchors: Dec 13 01:27:37.674060 systemd-resolved[252]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:27:37.674090 systemd-resolved[252]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:27:37.676817 systemd-resolved[252]: Defaulting to hostname 'linux'. Dec 13 01:27:37.678000 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:27:37.691676 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:27:37.815641 kernel: SCSI subsystem initialized Dec 13 01:27:37.815664 kernel: Loading iSCSI transport class v2.0-870. Dec 13 01:27:37.745164 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:27:37.834337 kernel: iscsi: registered transport (tcp) Dec 13 01:27:37.854147 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:27:37.854213 kernel: QLogic iSCSI HBA Driver Dec 13 01:27:37.887810 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 01:27:37.902556 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 01:27:37.937598 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:27:37.937666 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:27:37.944097 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 01:27:37.992347 kernel: raid6: neonx8 gen() 15779 MB/s Dec 13 01:27:38.012338 kernel: raid6: neonx4 gen() 15669 MB/s Dec 13 01:27:38.032337 kernel: raid6: neonx2 gen() 13265 MB/s Dec 13 01:27:38.053335 kernel: raid6: neonx1 gen() 10498 MB/s Dec 13 01:27:38.073331 kernel: raid6: int64x8 gen() 6959 MB/s Dec 13 01:27:38.093332 kernel: raid6: int64x4 gen() 7354 MB/s Dec 13 01:27:38.114338 kernel: raid6: int64x2 gen() 6134 MB/s Dec 13 01:27:38.137595 kernel: raid6: int64x1 gen() 5062 MB/s Dec 13 01:27:38.137617 kernel: raid6: using algorithm neonx8 gen() 15779 MB/s Dec 13 01:27:38.161516 kernel: raid6: .... 
xor() 11936 MB/s, rmw enabled Dec 13 01:27:38.161532 kernel: raid6: using neon recovery algorithm Dec 13 01:27:38.174045 kernel: xor: measuring software checksum speed Dec 13 01:27:38.174063 kernel: 8regs : 19679 MB/sec Dec 13 01:27:38.181655 kernel: 32regs : 18847 MB/sec Dec 13 01:27:38.181666 kernel: arm64_neon : 26857 MB/sec Dec 13 01:27:38.185944 kernel: xor: using function: arm64_neon (26857 MB/sec) Dec 13 01:27:38.237350 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 01:27:38.247684 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:27:38.263513 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:27:38.286969 systemd-udevd[438]: Using default interface naming scheme 'v255'. Dec 13 01:27:38.292541 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:27:38.309642 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 01:27:38.328265 dracut-pre-trigger[443]: rd.md=0: removing MD RAID activation Dec 13 01:27:38.356657 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:27:38.372656 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:27:38.410284 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:27:38.431263 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 01:27:38.459922 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 01:27:38.472589 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:27:38.486727 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:27:38.503132 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:27:38.530528 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 01:27:38.555439 kernel: hv_vmbus: Vmbus version:5.3 Dec 13 01:27:38.556940 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:27:38.584256 kernel: hv_vmbus: registering driver hid_hyperv Dec 13 01:27:38.584308 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 01:27:38.584319 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Dec 13 01:27:38.584114 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:27:38.611608 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Dec 13 01:27:38.584231 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:27:38.640129 kernel: hv_vmbus: registering driver hyperv_keyboard Dec 13 01:27:38.640153 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 01:27:38.640163 kernel: hv_vmbus: registering driver hv_netvsc Dec 13 01:27:38.640173 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Dec 13 01:27:38.623059 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Dec 13 01:27:38.674777 kernel: hv_vmbus: registering driver hv_storvsc Dec 13 01:27:38.674800 kernel: scsi host0: storvsc_host_t Dec 13 01:27:38.674968 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Dec 13 01:27:38.664580 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:27:38.703416 kernel: scsi host1: storvsc_host_t Dec 13 01:27:38.703624 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Dec 13 01:27:38.664773 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:27:38.681508 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:27:38.712059 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:27:38.731858 kernel: PTP clock support registered Dec 13 01:27:38.740075 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:27:38.778805 kernel: hv_utils: Registering HyperV Utility Driver Dec 13 01:27:38.778829 kernel: hv_vmbus: registering driver hv_utils Dec 13 01:27:38.778839 kernel: hv_utils: Heartbeat IC version 3.0 Dec 13 01:27:38.778849 kernel: hv_netvsc 0022487b-777d-0022-487b-777d0022487b eth0: VF slot 1 added Dec 13 01:27:38.778982 kernel: hv_utils: Shutdown IC version 3.2 Dec 13 01:27:38.778993 kernel: hv_utils: TimeSync IC version 4.0 Dec 13 01:27:38.778703 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:27:39.334804 kernel: hv_vmbus: registering driver hv_pci Dec 13 01:27:39.334829 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Dec 13 01:27:39.382063 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 01:27:39.382117 kernel: hv_pci 59913fcc-6353-4ba5-9511-44b2d9110d56: PCI VMBus probing: Using version 0x10004 Dec 13 01:27:39.464565 kernel: hv_pci 59913fcc-6353-4ba5-9511-44b2d9110d56: PCI host bridge to bus 6353:00 Dec 13 01:27:39.464812 kernel: pci_bus 6353:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Dec 13 01:27:39.464991 kernel: pci_bus 6353:00: No busn resource found for root bus, will use [bus 00-ff] Dec 13 01:27:39.465110 kernel: pci 6353:00:02.0: [15b3:1018] type 00 class 0x020000 Dec 13 01:27:39.465283 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Dec 13 01:27:39.465420 kernel: pci 6353:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Dec 13 01:27:39.465543 kernel: pci 6353:00:02.0: enabling Extended Tags Dec 13 01:27:39.465710 kernel: pci 6353:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 6353:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Dec 13 01:27:39.465842 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Dec 13 01:27:39.512606 kernel: pci_bus 6353:00: busn_res: [bus 00-ff] end is updated to 00 Dec 13 01:27:39.512791 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Dec 13 01:27:39.512925 kernel: pci 6353:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Dec 13 01:27:39.513338 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 13 01:27:39.513487 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Dec 13 01:27:39.513616 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Dec 13 01:27:39.513768 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:27:39.513780 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 13 01:27:39.277866 systemd-resolved[252]: Clock change detected. Flushing caches. 
Dec 13 01:27:39.314615 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:27:39.314837 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:27:39.385849 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:27:39.385971 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:27:39.401416 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:27:39.439013 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:27:39.498692 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:27:39.554916 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:27:39.622764 kernel: mlx5_core 6353:00:02.0: enabling device (0000 -> 0002) Dec 13 01:27:39.861948 kernel: mlx5_core 6353:00:02.0: firmware version: 16.30.1284 Dec 13 01:27:39.862113 kernel: hv_netvsc 0022487b-777d-0022-487b-777d0022487b eth0: VF registering: eth1 Dec 13 01:27:39.862246 kernel: mlx5_core 6353:00:02.0 eth1: joined to eth0 Dec 13 01:27:39.862341 kernel: mlx5_core 6353:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Dec 13 01:27:39.624756 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:27:39.879751 kernel: mlx5_core 6353:00:02.0 enP25427s1: renamed from eth1 Dec 13 01:27:40.032125 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Dec 13 01:27:40.147707 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (484) Dec 13 01:27:40.165254 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Dec 13 01:27:40.191521 kernel: BTRFS: device fsid 2893cd1e-612b-4262-912c-10787dc9c881 devid 1 transid 46 /dev/sda3 scanned by (udev-worker) (487) Dec 13 01:27:40.206832 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Dec 13 01:27:40.221751 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Dec 13 01:27:40.233653 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Dec 13 01:27:40.269976 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 01:27:40.308724 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:27:40.320682 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:27:41.330739 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:27:41.330798 disk-uuid[608]: The operation has completed successfully. Dec 13 01:27:41.392546 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:27:41.394721 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 01:27:41.431832 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 01:27:41.445611 sh[694]: Success Dec 13 01:27:41.475757 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Dec 13 01:27:41.640378 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 01:27:41.655432 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 01:27:41.661481 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Dec 13 01:27:41.698571 kernel: BTRFS info (device dm-0): first mount of filesystem 2893cd1e-612b-4262-912c-10787dc9c881 Dec 13 01:27:41.698626 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:27:41.706646 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 01:27:41.712013 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 01:27:41.716383 kernel: BTRFS info (device dm-0): using free space tree Dec 13 01:27:42.016417 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 01:27:42.021721 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 01:27:42.041947 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 01:27:42.049429 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 01:27:42.088247 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:27:42.088313 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:27:42.093024 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:27:42.114096 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:27:42.122873 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:27:42.135076 kernel: BTRFS info (device sda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:27:42.143251 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 01:27:42.156899 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 01:27:42.181242 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:27:42.199812 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:27:42.225168 systemd-networkd[878]: lo: Link UP Dec 13 01:27:42.225176 systemd-networkd[878]: lo: Gained carrier Dec 13 01:27:42.227878 systemd-networkd[878]: Enumeration completed Dec 13 01:27:42.229254 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:27:42.236055 systemd-networkd[878]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:27:42.236058 systemd-networkd[878]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:27:42.239751 systemd[1]: Reached target network.target - Network. Dec 13 01:27:42.325681 kernel: mlx5_core 6353:00:02.0 enP25427s1: Link up Dec 13 01:27:42.366942 kernel: hv_netvsc 0022487b-777d-0022-487b-777d0022487b eth0: Data path switched to VF: enP25427s1 Dec 13 01:27:42.366578 systemd-networkd[878]: enP25427s1: Link UP Dec 13 01:27:42.366697 systemd-networkd[878]: eth0: Link UP Dec 13 01:27:42.366825 systemd-networkd[878]: eth0: Gained carrier Dec 13 01:27:42.366834 systemd-networkd[878]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 13 01:27:42.391119 systemd-networkd[878]: enP25427s1: Gained carrier Dec 13 01:27:42.412704 systemd-networkd[878]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 13 01:27:43.127214 ignition[863]: Ignition 2.19.0 Dec 13 01:27:43.127226 ignition[863]: Stage: fetch-offline Dec 13 01:27:43.131713 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:27:43.127265 ignition[863]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:27:43.127273 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:27:43.127384 ignition[863]: parsed url from cmdline: "" Dec 13 01:27:43.127387 ignition[863]: no config URL provided Dec 13 01:27:43.127392 ignition[863]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:27:43.160966 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 13 01:27:43.127399 ignition[863]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:27:43.127403 ignition[863]: failed to fetch config: resource requires networking Dec 13 01:27:43.127630 ignition[863]: Ignition finished successfully Dec 13 01:27:43.186423 ignition[890]: Ignition 2.19.0 Dec 13 01:27:43.186432 ignition[890]: Stage: fetch Dec 13 01:27:43.186643 ignition[890]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:27:43.186653 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:27:43.186773 ignition[890]: parsed url from cmdline: "" Dec 13 01:27:43.186777 ignition[890]: no config URL provided Dec 13 01:27:43.186782 ignition[890]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:27:43.186790 ignition[890]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:27:43.186813 ignition[890]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 13 01:27:43.302601 ignition[890]: GET result: OK Dec 13 01:27:43.302684 ignition[890]: config has been read from IMDS userdata Dec 13 01:27:43.302724 ignition[890]: parsing config with SHA512: b49cd1c1b3f2e15bccdffc3a23c39b13bf34c2586c860f87586afa01de7644d155b7103f78ebbecf0022ca03ea8e0b67a3a364bc47386af7a163cbbd172bf1fb Dec 13 01:27:43.306361 unknown[890]: fetched base config from "system" Dec 13 01:27:43.306814 ignition[890]: fetch: fetch complete Dec 13 01:27:43.306367 unknown[890]: fetched base config from "system" Dec 13 01:27:43.306821 ignition[890]: fetch: fetch passed Dec 13 01:27:43.306373 unknown[890]: fetched user config from "azure" Dec 13 01:27:43.306888 ignition[890]: Ignition finished successfully Dec 13 01:27:43.312566 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 01:27:43.340833 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 01:27:43.365203 ignition[896]: Ignition 2.19.0 Dec 13 01:27:43.365217 ignition[896]: Stage: kargs Dec 13 01:27:43.371031 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 01:27:43.365428 ignition[896]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:27:43.365440 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:27:43.366707 ignition[896]: kargs: kargs passed Dec 13 01:27:43.366784 ignition[896]: Ignition finished successfully Dec 13 01:27:43.397986 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Dec 13 01:27:43.421333 ignition[902]: Ignition 2.19.0 Dec 13 01:27:43.421345 ignition[902]: Stage: disks Dec 13 01:27:43.421545 ignition[902]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:27:43.427999 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 01:27:43.421556 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:27:43.435909 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 01:27:43.422738 ignition[902]: disks: disks passed Dec 13 01:27:43.447279 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:27:43.422796 ignition[902]: Ignition finished successfully Dec 13 01:27:43.459573 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:27:43.472875 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:27:43.484601 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:27:43.509964 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 01:27:43.582434 systemd-fsck[910]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Dec 13 01:27:43.596167 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 01:27:43.614952 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 01:27:43.671681 kernel: EXT4-fs (sda9): mounted filesystem 32632247-db8d-4541-89c0-6f68c7fa7ee3 r/w with ordered data mode. Quota mode: none. Dec 13 01:27:43.672648 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 01:27:43.677727 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 01:27:43.718763 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:27:43.725827 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 01:27:43.771932 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (921) Dec 13 01:27:43.771957 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:27:43.738009 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 13 01:27:43.795782 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:27:43.795807 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:27:43.751171 systemd-networkd[878]: enP25427s1: Gained IPv6LL Dec 13 01:27:43.751587 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:27:43.751622 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:27:43.767576 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 01:27:43.839442 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:27:43.795595 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 01:27:43.839584 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 01:27:44.005786 systemd-networkd[878]: eth0: Gained IPv6LL Dec 13 01:27:44.334794 coreos-metadata[923]: Dec 13 01:27:44.334 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 13 01:27:44.344830 coreos-metadata[923]: Dec 13 01:27:44.344 INFO Fetch successful Dec 13 01:27:44.350512 coreos-metadata[923]: Dec 13 01:27:44.350 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 13 01:27:44.362588 coreos-metadata[923]: Dec 13 01:27:44.361 INFO Fetch successful Dec 13 01:27:44.381868 coreos-metadata[923]: Dec 13 01:27:44.381 INFO wrote hostname ci-4081.2.1-a-c1e94b9ee1 to /sysroot/etc/hostname Dec 13 01:27:44.392716 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 01:27:44.542153 initrd-setup-root[950]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:27:44.552042 initrd-setup-root[957]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:27:44.562537 initrd-setup-root[964]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:27:44.585335 initrd-setup-root[971]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:27:45.456568 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 01:27:45.471836 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 01:27:45.479531 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 01:27:45.503363 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 01:27:45.512242 kernel: BTRFS info (device sda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:27:45.535481 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 01:27:45.547211 ignition[1039]: INFO : Ignition 2.19.0 Dec 13 01:27:45.547211 ignition[1039]: INFO : Stage: mount Dec 13 01:27:45.556344 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:27:45.556344 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:27:45.556344 ignition[1039]: INFO : mount: mount passed Dec 13 01:27:45.556344 ignition[1039]: INFO : Ignition finished successfully Dec 13 01:27:45.552630 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 01:27:45.574920 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:27:45.597993 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:27:45.630187 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1050) Dec 13 01:27:45.644129 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:27:45.644176 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:27:45.649311 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:27:45.657687 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:27:45.660380 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 01:27:45.687295 ignition[1068]: INFO : Ignition 2.19.0 Dec 13 01:27:45.687295 ignition[1068]: INFO : Stage: files Dec 13 01:27:45.695365 ignition[1068]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:27:45.695365 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:27:45.695365 ignition[1068]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:27:45.714289 ignition[1068]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:27:45.714289 ignition[1068]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:27:45.766296 ignition[1068]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:27:45.773977 ignition[1068]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:27:45.773977 ignition[1068]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:27:45.766735 unknown[1068]: wrote ssh authorized keys file for user: core Dec 13 01:27:45.793105 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 01:27:45.793105 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Dec 13 01:27:45.919970 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 01:27:46.070539 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 01:27:46.070539 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:27:46.092023 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:27:46.092023 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:27:46.092023 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:27:46.092023 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:27:46.092023 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:27:46.092023 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:27:46.092023 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:27:46.092023 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:27:46.092023 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:27:46.092023 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 01:27:46.092023 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 01:27:46.092023 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 01:27:46.092023 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Dec 13 01:27:46.411490 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 01:27:46.680750 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 01:27:46.680750 ignition[1068]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 01:27:46.704122 ignition[1068]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:27:46.718032 ignition[1068]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:27:46.718032 ignition[1068]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 01:27:46.718032 ignition[1068]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:27:46.718032 ignition[1068]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:27:46.718032 ignition[1068]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:27:46.718032 ignition[1068]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:27:46.718032 ignition[1068]: INFO : files: files passed Dec 13 01:27:46.718032 ignition[1068]: INFO : Ignition finished successfully Dec 13 01:27:46.719523 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:27:46.760973 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:27:46.778879 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 01:27:46.838125 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:27:46.838125 initrd-setup-root-after-ignition[1094]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:27:46.801758 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:27:46.866596 initrd-setup-root-after-ignition[1098]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:27:46.801854 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 01:27:46.817557 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:27:46.831945 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:27:46.866974 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:27:46.906157 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:27:46.906280 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Dec 13 01:27:46.918509 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:27:46.928945 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:27:46.943877 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:27:46.964925 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:27:46.984085 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:27:47.004953 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:27:47.023385 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:27:47.030982 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:27:47.043512 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:27:47.055673 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:27:47.055751 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:27:47.072423 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:27:47.084326 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:27:47.095251 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:27:47.105902 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:27:47.118000 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:27:47.130596 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:27:47.142528 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:27:47.155221 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:27:47.167474 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:27:47.178647 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:27:47.188211 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:27:47.188290 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:27:47.203625 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:27:47.215164 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:27:47.227590 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:27:47.227634 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:27:47.241267 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:27:47.241343 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:27:47.260765 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:27:47.260823 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:27:47.273308 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:27:47.273355 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:27:47.284034 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 01:27:47.284080 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 01:27:47.314888 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Dec 13 01:27:47.339977 ignition[1119]: INFO : Ignition 2.19.0 Dec 13 01:27:47.339977 ignition[1119]: INFO : Stage: umount Dec 13 01:27:47.339977 ignition[1119]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:27:47.339977 ignition[1119]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:27:47.344364 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:27:47.393637 ignition[1119]: INFO : umount: umount passed Dec 13 01:27:47.393637 ignition[1119]: INFO : Ignition finished successfully Dec 13 01:27:47.353902 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:27:47.353983 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:27:47.372803 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:27:47.372867 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:27:47.385258 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:27:47.385351 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:27:47.395082 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:27:47.395578 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:27:47.395695 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:27:47.415245 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:27:47.415335 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:27:47.429942 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:27:47.430014 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:27:47.441150 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 01:27:47.441207 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 01:27:47.453529 systemd[1]: Stopped target network.target - Network. Dec 13 01:27:47.463466 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:27:47.463550 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:27:47.476524 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:27:47.486963 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:27:47.492699 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:27:47.512098 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:27:47.521993 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:27:47.532756 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:27:47.532817 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:27:47.544310 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:27:47.544354 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:27:47.550096 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:27:47.550154 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:27:47.560666 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:27:47.560717 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:27:47.572537 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:27:47.583326 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Dec 13 01:27:47.601185 systemd-networkd[878]: eth0: DHCPv6 lease lost Dec 13 01:27:47.602264 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:27:47.602405 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:27:47.610402 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:27:47.610644 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:27:47.620894 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:27:47.805248 kernel: hv_netvsc 0022487b-777d-0022-487b-777d0022487b eth0: Data path switched from VF: enP25427s1 Dec 13 01:27:47.620956 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:27:47.650888 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:27:47.660752 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:27:47.660835 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:27:47.672698 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:27:47.672756 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:27:47.683371 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:27:47.683420 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:27:47.695025 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:27:47.695072 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:27:47.708154 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:27:47.759450 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:27:47.759714 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:27:47.775576 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:27:47.775630 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:27:47.786392 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:27:47.786424 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:27:47.811972 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:27:47.812033 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:27:47.829780 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:27:47.829840 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:27:47.841295 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:27:47.841356 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:27:47.878072 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:27:47.893836 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:27:47.893929 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:27:47.908600 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 01:27:47.908655 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:27:47.923051 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Dec 13 01:27:47.923110 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:27:47.935724 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:27:47.935777 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:27:47.949599 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:27:47.949721 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:27:47.960894 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:27:47.960994 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:27:48.045089 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:27:48.045239 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:27:48.054736 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:27:48.069035 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:27:48.069096 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:27:48.093934 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:27:48.108890 systemd[1]: Switching root. Dec 13 01:27:48.174853 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Dec 13 01:27:48.174888 systemd-journald[217]: Journal stopped Dec 13 01:27:37.352345 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Dec 13 01:27:37.352371 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Thu Dec 12 23:24:21 -00 2024 Dec 13 01:27:37.352379 kernel: KASLR enabled Dec 13 01:27:37.352385 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '') Dec 13 01:27:37.352393 kernel: printk: bootconsole [pl11] enabled Dec 13 01:27:37.352398 kernel: efi: EFI v2.7 by EDK II Dec 13 01:27:37.352405 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3eae7718 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18 Dec 13 01:27:37.352412 kernel: random: crng init done Dec 13 01:27:37.352418 kernel: ACPI: Early table checksum verification disabled Dec 13 01:27:37.352423 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL) Dec 13 01:27:37.352430 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:27:37.352436 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:27:37.352443 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628) Dec 13 01:27:37.352449 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:27:37.352456 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:27:37.352463 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:27:37.352469 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:27:37.352477 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:27:37.352484 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:27:37.352490 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000) Dec 13 01:27:37.352496 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 
(v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Dec 13 01:27:37.352503 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200 Dec 13 01:27:37.352509 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] Dec 13 01:27:37.352515 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff] Dec 13 01:27:37.352521 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff] Dec 13 01:27:37.352528 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] Dec 13 01:27:37.352534 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] Dec 13 01:27:37.352540 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] Dec 13 01:27:37.352548 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] Dec 13 01:27:37.352554 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] Dec 13 01:27:37.352561 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] Dec 13 01:27:37.352567 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] Dec 13 01:27:37.352573 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] Dec 13 01:27:37.352579 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] Dec 13 01:27:37.352586 kernel: NUMA: NODE_DATA [mem 0x1bf7f0800-0x1bf7f5fff] Dec 13 01:27:37.352592 kernel: Zone ranges: Dec 13 01:27:37.352598 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff] Dec 13 01:27:37.352604 kernel: DMA32 empty Dec 13 01:27:37.352610 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff] Dec 13 01:27:37.352617 kernel: Movable zone start for each node Dec 13 01:27:37.352628 kernel: Early memory node ranges Dec 13 01:27:37.352635 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff] Dec 13 01:27:37.352641 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff] Dec 13 01:27:37.352648 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff] Dec 13 01:27:37.352655 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff] Dec 13 01:27:37.352663 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff] Dec 13 01:27:37.352669 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff] Dec 13 01:27:37.352676 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff] Dec 13 01:27:37.352683 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff] Dec 13 01:27:37.352690 kernel: On node 0, zone DMA: 36 pages in unavailable ranges Dec 13 01:27:37.352696 kernel: psci: probing for conduit method from ACPI. Dec 13 01:27:37.352703 kernel: psci: PSCIv1.1 detected in firmware. Dec 13 01:27:37.352710 kernel: psci: Using standard PSCI v0.2 function IDs Dec 13 01:27:37.352716 kernel: psci: MIGRATE_INFO_TYPE not supported. 
Dec 13 01:27:37.352723 kernel: psci: SMC Calling Convention v1.4 Dec 13 01:27:37.352730 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Dec 13 01:27:37.352736 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Dec 13 01:27:37.352745 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976 Dec 13 01:27:37.352751 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096 Dec 13 01:27:37.352758 kernel: pcpu-alloc: [0] 0 [0] 1 Dec 13 01:27:37.352764 kernel: Detected PIPT I-cache on CPU0 Dec 13 01:27:37.352771 kernel: CPU features: detected: GIC system register CPU interface Dec 13 01:27:37.352778 kernel: CPU features: detected: Hardware dirty bit management Dec 13 01:27:37.352784 kernel: CPU features: detected: Spectre-BHB Dec 13 01:27:37.352791 kernel: CPU features: kernel page table isolation forced ON by KASLR Dec 13 01:27:37.352798 kernel: CPU features: detected: Kernel page table isolation (KPTI) Dec 13 01:27:37.352804 kernel: CPU features: detected: ARM erratum 1418040 Dec 13 01:27:37.352811 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion) Dec 13 01:27:37.352819 kernel: CPU features: detected: SSBS not fully self-synchronizing Dec 13 01:27:37.352826 kernel: alternatives: applying boot alternatives Dec 13 01:27:37.352834 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6 Dec 13 01:27:37.352842 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 01:27:37.352848 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 01:27:37.352855 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Dec 13 01:27:37.352862 kernel: Fallback order for Node 0: 0 Dec 13 01:27:37.352869 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156 Dec 13 01:27:37.352876 kernel: Policy zone: Normal Dec 13 01:27:37.352882 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 01:27:37.352889 kernel: software IO TLB: area num 2. Dec 13 01:27:37.352897 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Dec 13 01:27:37.352904 kernel: Memory: 3982760K/4194160K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 211400K reserved, 0K cma-reserved) Dec 13 01:27:37.352911 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 01:27:37.352918 kernel: trace event string verifier disabled Dec 13 01:27:37.352924 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 01:27:37.352932 kernel: rcu: RCU event tracing is enabled. Dec 13 01:27:37.352938 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 01:27:37.352945 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 01:27:37.352952 kernel: Tracing variant of Tasks RCU enabled. Dec 13 01:27:37.352959 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Dec 13 01:27:37.352966 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 01:27:37.352974 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Dec 13 01:27:37.352980 kernel: GICv3: 960 SPIs implemented Dec 13 01:27:37.352987 kernel: GICv3: 0 Extended SPIs implemented Dec 13 01:27:37.352993 kernel: Root IRQ handler: gic_handle_irq Dec 13 01:27:37.353000 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Dec 13 01:27:37.353007 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Dec 13 01:27:37.353014 kernel: ITS: No ITS available, not enabling LPIs Dec 13 01:27:37.353021 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 01:27:37.353027 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 01:27:37.353034 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Dec 13 01:27:37.353041 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Dec 13 01:27:37.353048 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Dec 13 01:27:37.353057 kernel: Console: colour dummy device 80x25 Dec 13 01:27:37.353064 kernel: printk: console [tty1] enabled Dec 13 01:27:37.353071 kernel: ACPI: Core revision 20230628 Dec 13 01:27:37.353078 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Dec 13 01:27:37.353085 kernel: pid_max: default: 32768 minimum: 301 Dec 13 01:27:37.353092 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 01:27:37.353099 kernel: landlock: Up and running. Dec 13 01:27:37.353105 kernel: SELinux: Initializing. Dec 13 01:27:37.353112 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:27:37.353121 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Dec 13 01:27:37.353128 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:27:37.353135 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:27:37.353142 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Dec 13 01:27:37.353149 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0 Dec 13 01:27:37.353156 kernel: Hyper-V: enabling crash_kexec_post_notifiers Dec 13 01:27:37.353163 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:27:37.353177 kernel: rcu: Max phase no-delay instances is 400. Dec 13 01:27:37.353184 kernel: Remapping and enabling EFI services. Dec 13 01:27:37.353191 kernel: smp: Bringing up secondary CPUs ... Dec 13 01:27:37.353198 kernel: Detected PIPT I-cache on CPU1 Dec 13 01:27:37.353207 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Dec 13 01:27:37.353215 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Dec 13 01:27:37.353222 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Dec 13 01:27:37.353229 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 01:27:37.353236 kernel: SMP: Total of 2 processors activated. 
Dec 13 01:27:37.353244 kernel: CPU features: detected: 32-bit EL0 Support Dec 13 01:27:37.353252 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Dec 13 01:27:37.353260 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Dec 13 01:27:37.353267 kernel: CPU features: detected: CRC32 instructions Dec 13 01:27:37.353275 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Dec 13 01:27:37.353282 kernel: CPU features: detected: LSE atomic instructions Dec 13 01:27:37.353290 kernel: CPU features: detected: Privileged Access Never Dec 13 01:27:37.353297 kernel: CPU: All CPU(s) started at EL1 Dec 13 01:27:37.353304 kernel: alternatives: applying system-wide alternatives Dec 13 01:27:37.353311 kernel: devtmpfs: initialized Dec 13 01:27:37.358457 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:27:37.358482 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 01:27:37.358490 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:27:37.358497 kernel: SMBIOS 3.1.0 present. Dec 13 01:27:37.358505 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Dec 13 01:27:37.358513 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:27:37.358521 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Dec 13 01:27:37.358529 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Dec 13 01:27:37.358542 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Dec 13 01:27:37.358549 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:27:37.358557 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Dec 13 01:27:37.358564 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:27:37.358572 kernel: cpuidle: using governor menu Dec 13 01:27:37.358579 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Dec 13 01:27:37.358586 kernel: ASID allocator initialised with 32768 entries Dec 13 01:27:37.358594 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:27:37.358601 kernel: Serial: AMBA PL011 UART driver Dec 13 01:27:37.358611 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Dec 13 01:27:37.358618 kernel: Modules: 0 pages in range for non-PLT usage Dec 13 01:27:37.358625 kernel: Modules: 509040 pages in range for PLT usage Dec 13 01:27:37.358633 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:27:37.358640 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 01:27:37.358648 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Dec 13 01:27:37.358655 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Dec 13 01:27:37.358662 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:27:37.358670 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 01:27:37.358678 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Dec 13 01:27:37.358686 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Dec 13 01:27:37.358693 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:27:37.358700 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:27:37.358708 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:27:37.358715 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:27:37.358722 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 01:27:37.358730 kernel: ACPI: Interpreter enabled Dec 13 01:27:37.358737 kernel: ACPI: Using GIC for interrupt routing Dec 13 01:27:37.358745 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Dec 13 01:27:37.358754 kernel: printk: console [ttyAMA0] enabled Dec 13 01:27:37.358762 kernel: printk: bootconsole [pl11] disabled Dec 13 01:27:37.358769 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Dec 13 01:27:37.358776 kernel: iommu: Default domain type: Translated Dec 13 01:27:37.358784 kernel: iommu: DMA domain TLB invalidation policy: strict mode Dec 13 01:27:37.358791 kernel: efivars: Registered efivars operations Dec 13 01:27:37.358798 kernel: vgaarb: loaded Dec 13 01:27:37.358805 kernel: clocksource: Switched to clocksource arch_sys_counter Dec 13 01:27:37.358813 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 01:27:37.358822 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 01:27:37.358829 kernel: pnp: PnP ACPI init Dec 13 01:27:37.358836 kernel: pnp: PnP ACPI: found 0 devices Dec 13 01:27:37.358843 kernel: NET: Registered PF_INET protocol family Dec 13 01:27:37.358851 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 01:27:37.358858 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Dec 13 01:27:37.358866 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 01:27:37.358873 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Dec 13 01:27:37.358882 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Dec 13 01:27:37.358890 kernel: TCP: Hash tables configured (established 32768 bind 32768) Dec 13 01:27:37.358897 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:27:37.358905 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Dec 13 01:27:37.358912 
kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 01:27:37.358920 kernel: PCI: CLS 0 bytes, default 64 Dec 13 01:27:37.358928 kernel: kvm [1]: HYP mode not available Dec 13 01:27:37.358935 kernel: Initialise system trusted keyrings Dec 13 01:27:37.358942 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Dec 13 01:27:37.358951 kernel: Key type asymmetric registered Dec 13 01:27:37.358958 kernel: Asymmetric key parser 'x509' registered Dec 13 01:27:37.358966 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Dec 13 01:27:37.358973 kernel: io scheduler mq-deadline registered Dec 13 01:27:37.358980 kernel: io scheduler kyber registered Dec 13 01:27:37.358988 kernel: io scheduler bfq registered Dec 13 01:27:37.358995 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:27:37.359002 kernel: thunder_xcv, ver 1.0 Dec 13 01:27:37.359010 kernel: thunder_bgx, ver 1.0 Dec 13 01:27:37.359017 kernel: nicpf, ver 1.0 Dec 13 01:27:37.359026 kernel: nicvf, ver 1.0 Dec 13 01:27:37.359205 kernel: rtc-efi rtc-efi.0: registered as rtc0 Dec 13 01:27:37.359282 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T01:27:36 UTC (1734053256) Dec 13 01:27:37.359293 kernel: efifb: probing for efifb Dec 13 01:27:37.359300 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Dec 13 01:27:37.359308 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Dec 13 01:27:37.359315 kernel: efifb: scrolling: redraw Dec 13 01:27:37.359355 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 13 01:27:37.359364 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 01:27:37.359372 kernel: fb0: EFI VGA frame buffer device Dec 13 01:27:37.359379 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Dec 13 01:27:37.359387 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 01:27:37.359394 kernel: No ACPI PMU IRQ for CPU0 Dec 13 01:27:37.359401 kernel: No ACPI PMU IRQ for CPU1 Dec 13 01:27:37.359408 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Dec 13 01:27:37.359416 kernel: watchdog: Delayed init of the lockup detector failed: -19 Dec 13 01:27:37.359425 kernel: watchdog: Hard watchdog permanently disabled Dec 13 01:27:37.359433 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:27:37.359440 kernel: Segment Routing with IPv6 Dec 13 01:27:37.359447 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:27:37.359455 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:27:37.359462 kernel: Key type dns_resolver registered Dec 13 01:27:37.359469 kernel: registered taskstats version 1 Dec 13 01:27:37.359477 kernel: Loading compiled-in X.509 certificates Dec 13 01:27:37.359484 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: d83da9ddb9e3c2439731828371f21d0232fd9ffb' Dec 13 01:27:37.359492 kernel: Key type .fscrypt registered Dec 13 01:27:37.359510 kernel: Key type fscrypt-provisioning registered Dec 13 01:27:37.359518 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 01:27:37.359525 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:27:37.359532 kernel: ima: No architecture policies found Dec 13 01:27:37.359540 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Dec 13 01:27:37.359547 kernel: clk: Disabling unused clocks Dec 13 01:27:37.359555 kernel: Freeing unused kernel memory: 39360K Dec 13 01:27:37.359562 kernel: Run /init as init process Dec 13 01:27:37.359571 kernel: with arguments: Dec 13 01:27:37.359578 kernel: /init Dec 13 01:27:37.359585 kernel: with environment: Dec 13 01:27:37.359592 kernel: HOME=/ Dec 13 01:27:37.359599 kernel: TERM=linux Dec 13 01:27:37.359607 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:27:37.359617 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:27:37.359626 systemd[1]: Detected virtualization microsoft. Dec 13 01:27:37.359636 systemd[1]: Detected architecture arm64. Dec 13 01:27:37.359644 systemd[1]: Running in initrd. Dec 13 01:27:37.359651 systemd[1]: No hostname configured, using default hostname. Dec 13 01:27:37.359659 systemd[1]: Hostname set to . Dec 13 01:27:37.359667 systemd[1]: Initializing machine ID from random generator. Dec 13 01:27:37.359675 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:27:37.359683 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:27:37.359698 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:27:37.359709 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 01:27:37.359717 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:27:37.359726 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 01:27:37.359734 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 01:27:37.359743 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 01:27:37.359751 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 01:27:37.359759 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:27:37.359769 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:27:37.359777 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:27:37.359785 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:27:37.359793 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:27:37.359801 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:27:37.359809 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:27:37.359817 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:27:37.359825 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:27:37.359834 systemd[1]: Listening on systemd-journald.socket - Journal Socket. 
Dec 13 01:27:37.359843 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:27:37.359850 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:27:37.359858 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:27:37.359866 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:27:37.359874 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:27:37.359882 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:27:37.359890 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:27:37.359898 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:27:37.359908 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:27:37.359916 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:27:37.359947 systemd-journald[217]: Collecting audit messages is disabled. Dec 13 01:27:37.359967 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:27:37.359978 systemd-journald[217]: Journal started Dec 13 01:27:37.359998 systemd-journald[217]: Runtime Journal (/run/log/journal/e8765cee9ac1405aa1b11d1a43d90565) is 8.0M, max 78.5M, 70.5M free. Dec 13 01:27:37.360711 systemd-modules-load[218]: Inserted module 'overlay' Dec 13 01:27:37.380936 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:27:37.392341 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 01:27:37.392499 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:27:37.410898 kernel: Bridge firewalling registered Dec 13 01:27:37.404319 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:27:37.409977 systemd-modules-load[218]: Inserted module 'br_netfilter' Dec 13 01:27:37.418131 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:27:37.431352 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:27:37.440902 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:27:37.467674 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:27:37.483340 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:27:37.505975 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:27:37.525009 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:27:37.534753 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:27:37.551419 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:27:37.576112 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:27:37.583636 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:27:37.608598 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 01:27:37.617554 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:27:37.639535 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Dec 13 01:27:37.670121 dracut-cmdline[250]: dracut-dracut-053 Dec 13 01:27:37.670121 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=9494f75a68cfbdce95d0d2f9b58d6d75bc38ee5b4e31dfc2a6da695ffafefba6 Dec 13 01:27:37.674050 systemd-resolved[252]: Positive Trust Anchors: Dec 13 01:27:37.674060 systemd-resolved[252]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:27:37.674090 systemd-resolved[252]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:27:37.676817 systemd-resolved[252]: Defaulting to hostname 'linux'. Dec 13 01:27:37.678000 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:27:37.691676 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:27:37.815641 kernel: SCSI subsystem initialized Dec 13 01:27:37.815664 kernel: Loading iSCSI transport class v2.0-870. Dec 13 01:27:37.745164 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:27:37.834337 kernel: iscsi: registered transport (tcp) Dec 13 01:27:37.854147 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:27:37.854213 kernel: QLogic iSCSI HBA Driver Dec 13 01:27:37.887810 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 01:27:37.902556 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 01:27:37.937598 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:27:37.937666 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:27:37.944097 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 01:27:37.992347 kernel: raid6: neonx8 gen() 15779 MB/s Dec 13 01:27:38.012338 kernel: raid6: neonx4 gen() 15669 MB/s Dec 13 01:27:38.032337 kernel: raid6: neonx2 gen() 13265 MB/s Dec 13 01:27:38.053335 kernel: raid6: neonx1 gen() 10498 MB/s Dec 13 01:27:38.073331 kernel: raid6: int64x8 gen() 6959 MB/s Dec 13 01:27:38.093332 kernel: raid6: int64x4 gen() 7354 MB/s Dec 13 01:27:38.114338 kernel: raid6: int64x2 gen() 6134 MB/s Dec 13 01:27:38.137595 kernel: raid6: int64x1 gen() 5062 MB/s Dec 13 01:27:38.137617 kernel: raid6: using algorithm neonx8 gen() 15779 MB/s Dec 13 01:27:38.161516 kernel: raid6: .... 
xor() 11936 MB/s, rmw enabled Dec 13 01:27:38.161532 kernel: raid6: using neon recovery algorithm Dec 13 01:27:38.174045 kernel: xor: measuring software checksum speed Dec 13 01:27:38.174063 kernel: 8regs : 19679 MB/sec Dec 13 01:27:38.181655 kernel: 32regs : 18847 MB/sec Dec 13 01:27:38.181666 kernel: arm64_neon : 26857 MB/sec Dec 13 01:27:38.185944 kernel: xor: using function: arm64_neon (26857 MB/sec) Dec 13 01:27:38.237350 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 01:27:38.247684 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:27:38.263513 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:27:38.286969 systemd-udevd[438]: Using default interface naming scheme 'v255'. Dec 13 01:27:38.292541 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:27:38.309642 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 01:27:38.328265 dracut-pre-trigger[443]: rd.md=0: removing MD RAID activation Dec 13 01:27:38.356657 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:27:38.372656 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:27:38.410284 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:27:38.431263 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 01:27:38.459922 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 01:27:38.472589 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:27:38.486727 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:27:38.503132 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:27:38.530528 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 01:27:38.555439 kernel: hv_vmbus: Vmbus version:5.3 Dec 13 01:27:38.556940 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:27:38.584256 kernel: hv_vmbus: registering driver hid_hyperv Dec 13 01:27:38.584308 kernel: pps_core: LinuxPPS API ver. 1 registered Dec 13 01:27:38.584319 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Dec 13 01:27:38.584114 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:27:38.611608 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Dec 13 01:27:38.584231 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:27:38.640129 kernel: hv_vmbus: registering driver hyperv_keyboard Dec 13 01:27:38.640153 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Dec 13 01:27:38.640163 kernel: hv_vmbus: registering driver hv_netvsc Dec 13 01:27:38.640173 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Dec 13 01:27:38.623059 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Dec 13 01:27:38.674777 kernel: hv_vmbus: registering driver hv_storvsc Dec 13 01:27:38.674800 kernel: scsi host0: storvsc_host_t Dec 13 01:27:38.674968 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Dec 13 01:27:38.664580 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:27:38.703416 kernel: scsi host1: storvsc_host_t Dec 13 01:27:38.703624 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Dec 13 01:27:38.664773 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:27:38.681508 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:27:38.712059 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:27:38.731858 kernel: PTP clock support registered Dec 13 01:27:38.740075 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:27:38.778805 kernel: hv_utils: Registering HyperV Utility Driver Dec 13 01:27:38.778829 kernel: hv_vmbus: registering driver hv_utils Dec 13 01:27:38.778839 kernel: hv_utils: Heartbeat IC version 3.0 Dec 13 01:27:38.778849 kernel: hv_netvsc 0022487b-777d-0022-487b-777d0022487b eth0: VF slot 1 added Dec 13 01:27:38.778982 kernel: hv_utils: Shutdown IC version 3.2 Dec 13 01:27:38.778993 kernel: hv_utils: TimeSync IC version 4.0 Dec 13 01:27:38.778703 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:27:39.334804 kernel: hv_vmbus: registering driver hv_pci Dec 13 01:27:39.334829 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Dec 13 01:27:39.382063 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Dec 13 01:27:39.382117 kernel: hv_pci 59913fcc-6353-4ba5-9511-44b2d9110d56: PCI VMBus probing: Using version 0x10004 Dec 13 01:27:39.464565 kernel: hv_pci 59913fcc-6353-4ba5-9511-44b2d9110d56: PCI host bridge to bus 6353:00 Dec 13 01:27:39.464812 kernel: pci_bus 6353:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Dec 13 01:27:39.464991 kernel: pci_bus 6353:00: No busn resource found for root bus, will use [bus 00-ff] Dec 13 01:27:39.465110 kernel: pci 6353:00:02.0: [15b3:1018] type 00 class 0x020000 Dec 13 01:27:39.465283 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Dec 13 01:27:39.465420 kernel: pci 6353:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Dec 13 01:27:39.465543 kernel: pci 6353:00:02.0: enabling Extended Tags Dec 13 01:27:39.465710 kernel: pci 6353:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 6353:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Dec 13 01:27:39.465842 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Dec 13 01:27:39.512606 kernel: pci_bus 6353:00: busn_res: [bus 00-ff] end is updated to 00 Dec 13 01:27:39.512791 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Dec 13 01:27:39.512925 kernel: pci 6353:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Dec 13 01:27:39.513338 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 13 01:27:39.513487 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Dec 13 01:27:39.513616 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Dec 13 01:27:39.513768 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:27:39.513780 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 13 01:27:39.277866 systemd-resolved[252]: Clock change detected. Flushing caches. 
Dec 13 01:27:39.314615 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:27:39.314837 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:27:39.385849 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:27:39.385971 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:27:39.401416 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:27:39.439013 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:27:39.498692 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:27:39.554916 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:27:39.622764 kernel: mlx5_core 6353:00:02.0: enabling device (0000 -> 0002) Dec 13 01:27:39.861948 kernel: mlx5_core 6353:00:02.0: firmware version: 16.30.1284 Dec 13 01:27:39.862113 kernel: hv_netvsc 0022487b-777d-0022-487b-777d0022487b eth0: VF registering: eth1 Dec 13 01:27:39.862246 kernel: mlx5_core 6353:00:02.0 eth1: joined to eth0 Dec 13 01:27:39.862341 kernel: mlx5_core 6353:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Dec 13 01:27:39.624756 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:27:39.879751 kernel: mlx5_core 6353:00:02.0 enP25427s1: renamed from eth1 Dec 13 01:27:40.032125 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Dec 13 01:27:40.147707 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (484) Dec 13 01:27:40.165254 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Dec 13 01:27:40.191521 kernel: BTRFS: device fsid 2893cd1e-612b-4262-912c-10787dc9c881 devid 1 transid 46 /dev/sda3 scanned by (udev-worker) (487) Dec 13 01:27:40.206832 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Dec 13 01:27:40.221751 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Dec 13 01:27:40.233653 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Dec 13 01:27:40.269976 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Dec 13 01:27:40.308724 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:27:40.320682 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:27:41.330739 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 01:27:41.330798 disk-uuid[608]: The operation has completed successfully. Dec 13 01:27:41.392546 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 01:27:41.394721 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Dec 13 01:27:41.431832 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Dec 13 01:27:41.445611 sh[694]: Success Dec 13 01:27:41.475757 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Dec 13 01:27:41.640378 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 01:27:41.655432 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 01:27:41.661481 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Dec 13 01:27:41.698571 kernel: BTRFS info (device dm-0): first mount of filesystem 2893cd1e-612b-4262-912c-10787dc9c881 Dec 13 01:27:41.698626 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:27:41.706646 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 01:27:41.712013 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 01:27:41.716383 kernel: BTRFS info (device dm-0): using free space tree Dec 13 01:27:42.016417 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 01:27:42.021721 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 01:27:42.041947 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 01:27:42.049429 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 01:27:42.088247 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:27:42.088313 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:27:42.093024 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:27:42.114096 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:27:42.122873 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 01:27:42.135076 kernel: BTRFS info (device sda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:27:42.143251 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 01:27:42.156899 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 01:27:42.181242 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:27:42.199812 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:27:42.225168 systemd-networkd[878]: lo: Link UP Dec 13 01:27:42.225176 systemd-networkd[878]: lo: Gained carrier Dec 13 01:27:42.227878 systemd-networkd[878]: Enumeration completed Dec 13 01:27:42.229254 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:27:42.236055 systemd-networkd[878]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:27:42.236058 systemd-networkd[878]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:27:42.239751 systemd[1]: Reached target network.target - Network. Dec 13 01:27:42.325681 kernel: mlx5_core 6353:00:02.0 enP25427s1: Link up Dec 13 01:27:42.366942 kernel: hv_netvsc 0022487b-777d-0022-487b-777d0022487b eth0: Data path switched to VF: enP25427s1 Dec 13 01:27:42.366578 systemd-networkd[878]: enP25427s1: Link UP Dec 13 01:27:42.366697 systemd-networkd[878]: eth0: Link UP Dec 13 01:27:42.366825 systemd-networkd[878]: eth0: Gained carrier Dec 13 01:27:42.366834 systemd-networkd[878]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 13 01:27:42.391119 systemd-networkd[878]: enP25427s1: Gained carrier Dec 13 01:27:42.412704 systemd-networkd[878]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 13 01:27:43.127214 ignition[863]: Ignition 2.19.0 Dec 13 01:27:43.127226 ignition[863]: Stage: fetch-offline Dec 13 01:27:43.131713 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:27:43.127265 ignition[863]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:27:43.127273 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:27:43.127384 ignition[863]: parsed url from cmdline: "" Dec 13 01:27:43.127387 ignition[863]: no config URL provided Dec 13 01:27:43.127392 ignition[863]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:27:43.160966 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Dec 13 01:27:43.127399 ignition[863]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:27:43.127403 ignition[863]: failed to fetch config: resource requires networking Dec 13 01:27:43.127630 ignition[863]: Ignition finished successfully Dec 13 01:27:43.186423 ignition[890]: Ignition 2.19.0 Dec 13 01:27:43.186432 ignition[890]: Stage: fetch Dec 13 01:27:43.186643 ignition[890]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:27:43.186653 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:27:43.186773 ignition[890]: parsed url from cmdline: "" Dec 13 01:27:43.186777 ignition[890]: no config URL provided Dec 13 01:27:43.186782 ignition[890]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 01:27:43.186790 ignition[890]: no config at "/usr/lib/ignition/user.ign" Dec 13 01:27:43.186813 ignition[890]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Dec 13 01:27:43.302601 ignition[890]: GET result: OK Dec 13 01:27:43.302684 ignition[890]: config has been read from IMDS userdata Dec 13 01:27:43.302724 ignition[890]: parsing config with SHA512: b49cd1c1b3f2e15bccdffc3a23c39b13bf34c2586c860f87586afa01de7644d155b7103f78ebbecf0022ca03ea8e0b67a3a364bc47386af7a163cbbd172bf1fb Dec 13 01:27:43.306361 unknown[890]: fetched base config from "system" Dec 13 01:27:43.306814 ignition[890]: fetch: fetch complete Dec 13 01:27:43.306367 unknown[890]: fetched base config from "system" Dec 13 01:27:43.306821 ignition[890]: fetch: fetch passed Dec 13 01:27:43.306373 unknown[890]: fetched user config from "azure" Dec 13 01:27:43.306888 ignition[890]: Ignition finished successfully Dec 13 01:27:43.312566 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Dec 13 01:27:43.340833 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Dec 13 01:27:43.365203 ignition[896]: Ignition 2.19.0 Dec 13 01:27:43.365217 ignition[896]: Stage: kargs Dec 13 01:27:43.371031 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Dec 13 01:27:43.365428 ignition[896]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:27:43.365440 ignition[896]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:27:43.366707 ignition[896]: kargs: kargs passed Dec 13 01:27:43.366784 ignition[896]: Ignition finished successfully Dec 13 01:27:43.397986 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
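The fetch stage above pulls the machine config from the Azure IMDS userData endpoint before parsing it by SHA512. For reference, a minimal Python sketch of an equivalent request follows; this is a hypothetical illustration, not what the Ignition binary itself runs. It assumes the documented IMDS behavior: requests must carry the Metadata: true header, and userData is returned base64-encoded.

    # Hypothetical illustration of the IMDS userData request logged above.
    import base64
    import urllib.request

    IMDS_USERDATA = ("http://169.254.169.254/metadata/instance/compute/userData"
                     "?api-version=2021-01-01&format=text")

    req = urllib.request.Request(IMDS_USERDATA, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        encoded = resp.read()

    # userData arrives base64-encoded; decoding yields the Ignition config payload.
    print(base64.b64decode(encoded).decode("utf-8", errors="replace"))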
Dec 13 01:27:43.421333 ignition[902]: Ignition 2.19.0 Dec 13 01:27:43.421345 ignition[902]: Stage: disks Dec 13 01:27:43.421545 ignition[902]: no configs at "/usr/lib/ignition/base.d" Dec 13 01:27:43.427999 systemd[1]: Finished ignition-disks.service - Ignition (disks). Dec 13 01:27:43.421556 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:27:43.435909 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Dec 13 01:27:43.422738 ignition[902]: disks: disks passed Dec 13 01:27:43.447279 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Dec 13 01:27:43.422796 ignition[902]: Ignition finished successfully Dec 13 01:27:43.459573 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:27:43.472875 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:27:43.484601 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:27:43.509964 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Dec 13 01:27:43.582434 systemd-fsck[910]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Dec 13 01:27:43.596167 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Dec 13 01:27:43.614952 systemd[1]: Mounting sysroot.mount - /sysroot... Dec 13 01:27:43.671681 kernel: EXT4-fs (sda9): mounted filesystem 32632247-db8d-4541-89c0-6f68c7fa7ee3 r/w with ordered data mode. Quota mode: none. Dec 13 01:27:43.672648 systemd[1]: Mounted sysroot.mount - /sysroot. Dec 13 01:27:43.677727 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Dec 13 01:27:43.718763 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:27:43.725827 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Dec 13 01:27:43.771932 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (921) Dec 13 01:27:43.771957 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:27:43.738009 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Dec 13 01:27:43.795782 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:27:43.795807 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:27:43.751171 systemd-networkd[878]: enP25427s1: Gained IPv6LL Dec 13 01:27:43.751587 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 01:27:43.751622 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:27:43.767576 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Dec 13 01:27:43.839442 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:27:43.795595 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Dec 13 01:27:43.839584 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 01:27:44.005786 systemd-networkd[878]: eth0: Gained IPv6LL Dec 13 01:27:44.334794 coreos-metadata[923]: Dec 13 01:27:44.334 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 13 01:27:44.344830 coreos-metadata[923]: Dec 13 01:27:44.344 INFO Fetch successful Dec 13 01:27:44.350512 coreos-metadata[923]: Dec 13 01:27:44.350 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Dec 13 01:27:44.362588 coreos-metadata[923]: Dec 13 01:27:44.361 INFO Fetch successful Dec 13 01:27:44.381868 coreos-metadata[923]: Dec 13 01:27:44.381 INFO wrote hostname ci-4081.2.1-a-c1e94b9ee1 to /sysroot/etc/hostname Dec 13 01:27:44.392716 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 01:27:44.542153 initrd-setup-root[950]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 01:27:44.552042 initrd-setup-root[957]: cut: /sysroot/etc/group: No such file or directory Dec 13 01:27:44.562537 initrd-setup-root[964]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 01:27:44.585335 initrd-setup-root[971]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 01:27:45.456568 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Dec 13 01:27:45.471836 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Dec 13 01:27:45.479531 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Dec 13 01:27:45.503363 systemd[1]: sysroot-oem.mount: Deactivated successfully. Dec 13 01:27:45.512242 kernel: BTRFS info (device sda6): last unmount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:27:45.535481 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Dec 13 01:27:45.547211 ignition[1039]: INFO : Ignition 2.19.0 Dec 13 01:27:45.547211 ignition[1039]: INFO : Stage: mount Dec 13 01:27:45.556344 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:27:45.556344 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:27:45.556344 ignition[1039]: INFO : mount: mount passed Dec 13 01:27:45.556344 ignition[1039]: INFO : Ignition finished successfully Dec 13 01:27:45.552630 systemd[1]: Finished ignition-mount.service - Ignition (mount). Dec 13 01:27:45.574920 systemd[1]: Starting ignition-files.service - Ignition (files)... Dec 13 01:27:45.597993 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Dec 13 01:27:45.630187 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1050) Dec 13 01:27:45.644129 kernel: BTRFS info (device sda6): first mount of filesystem dbef6a22-a801-4c1e-a0cd-3fc525f899dd Dec 13 01:27:45.644176 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 01:27:45.649311 kernel: BTRFS info (device sda6): using free space tree Dec 13 01:27:45.657687 kernel: BTRFS info (device sda6): auto enabling async discard Dec 13 01:27:45.660380 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Dec 13 01:27:45.687295 ignition[1068]: INFO : Ignition 2.19.0 Dec 13 01:27:45.687295 ignition[1068]: INFO : Stage: files Dec 13 01:27:45.695365 ignition[1068]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:27:45.695365 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:27:45.695365 ignition[1068]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:27:45.714289 ignition[1068]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:27:45.714289 ignition[1068]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:27:45.766296 ignition[1068]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:27:45.773977 ignition[1068]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:27:45.773977 ignition[1068]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:27:45.766735 unknown[1068]: wrote ssh authorized keys file for user: core Dec 13 01:27:45.793105 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 01:27:45.793105 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Dec 13 01:27:45.919970 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Dec 13 01:27:46.070539 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 01:27:46.070539 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:27:46.092023 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:27:46.092023 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:27:46.092023 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:27:46.092023 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:27:46.092023 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:27:46.092023 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:27:46.092023 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:27:46.092023 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:27:46.092023 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:27:46.092023 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 01:27:46.092023 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(9): 
[finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 01:27:46.092023 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 01:27:46.092023 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Dec 13 01:27:46.411490 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Dec 13 01:27:46.680750 ignition[1068]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Dec 13 01:27:46.680750 ignition[1068]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Dec 13 01:27:46.704122 ignition[1068]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:27:46.718032 ignition[1068]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:27:46.718032 ignition[1068]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Dec 13 01:27:46.718032 ignition[1068]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:27:46.718032 ignition[1068]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:27:46.718032 ignition[1068]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:27:46.718032 ignition[1068]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:27:46.718032 ignition[1068]: INFO : files: files passed Dec 13 01:27:46.718032 ignition[1068]: INFO : Ignition finished successfully Dec 13 01:27:46.719523 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:27:46.760973 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:27:46.778879 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 01:27:46.838125 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:27:46.838125 initrd-setup-root-after-ignition[1094]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:27:46.801758 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:27:46.866596 initrd-setup-root-after-ignition[1098]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:27:46.801854 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Dec 13 01:27:46.817557 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:27:46.831945 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:27:46.866974 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:27:46.906157 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:27:46.906280 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
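The files-stage entries above (ignition[1068]) record two kinds of operations: downloading payloads such as the kubernetes sysext image, and writing the /etc/extensions/kubernetes.raw link that points at it. Below is a hedged sketch of the same two steps performed outside Ignition, with the URL and paths taken from the log; it must run as root, and everything beyond those values is an illustrative assumption, not how Ignition itself does it.

#!/usr/bin/env python3
# Sketch only -- mirrors the download and symlink the log records;
# it is not Ignition's implementation of these operations.
import os
import urllib.request

SYSEXT_URL = ("https://github.com/flatcar/sysext-bakery/releases/download/"
              "latest/kubernetes-v1.30.1-arm64.raw")
RAW_PATH = "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
LINK_PATH = "/etc/extensions/kubernetes.raw"

os.makedirs(os.path.dirname(RAW_PATH), exist_ok=True)
urllib.request.urlretrieve(SYSEXT_URL, RAW_PATH)   # op(a): GET ... attempt #1

os.makedirs(os.path.dirname(LINK_PATH), exist_ok=True)
if not os.path.islink(LINK_PATH):
    os.symlink(RAW_PATH, LINK_PATH)   # op(9): writing link kubernetes.raw -> sysext image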
Dec 13 01:27:46.918509 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:27:46.928945 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:27:46.943877 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:27:46.964925 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:27:46.984085 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:27:47.004953 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:27:47.023385 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:27:47.030982 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:27:47.043512 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:27:47.055673 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:27:47.055751 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:27:47.072423 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:27:47.084326 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:27:47.095251 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:27:47.105902 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:27:47.118000 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:27:47.130596 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:27:47.142528 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:27:47.155221 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:27:47.167474 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:27:47.178647 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:27:47.188211 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:27:47.188290 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:27:47.203625 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:27:47.215164 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:27:47.227590 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Dec 13 01:27:47.227634 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:27:47.241267 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:27:47.241343 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:27:47.260765 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:27:47.260823 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:27:47.273308 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:27:47.273355 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:27:47.284034 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 01:27:47.284080 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 01:27:47.314888 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Dec 13 01:27:47.339977 ignition[1119]: INFO : Ignition 2.19.0 Dec 13 01:27:47.339977 ignition[1119]: INFO : Stage: umount Dec 13 01:27:47.339977 ignition[1119]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:27:47.339977 ignition[1119]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:27:47.344364 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:27:47.393637 ignition[1119]: INFO : umount: umount passed Dec 13 01:27:47.393637 ignition[1119]: INFO : Ignition finished successfully Dec 13 01:27:47.353902 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:27:47.353983 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:27:47.372803 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:27:47.372867 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:27:47.385258 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:27:47.385351 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:27:47.395082 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:27:47.395578 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:27:47.395695 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:27:47.415245 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:27:47.415335 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:27:47.429942 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:27:47.430014 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:27:47.441150 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 01:27:47.441207 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 01:27:47.453529 systemd[1]: Stopped target network.target - Network. Dec 13 01:27:47.463466 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:27:47.463550 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:27:47.476524 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:27:47.486963 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:27:47.492699 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:27:47.512098 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:27:47.521993 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:27:47.532756 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 01:27:47.532817 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:27:47.544310 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:27:47.544354 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:27:47.550096 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:27:47.550154 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:27:47.560666 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:27:47.560717 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:27:47.572537 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:27:47.583326 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Dec 13 01:27:47.601185 systemd-networkd[878]: eth0: DHCPv6 lease lost Dec 13 01:27:47.602264 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:27:47.602405 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:27:47.610402 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:27:47.610644 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:27:47.620894 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:27:47.805248 kernel: hv_netvsc 0022487b-777d-0022-487b-777d0022487b eth0: Data path switched from VF: enP25427s1 Dec 13 01:27:47.620956 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:27:47.650888 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:27:47.660752 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:27:47.660835 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:27:47.672698 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:27:47.672756 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:27:47.683371 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:27:47.683420 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:27:47.695025 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:27:47.695072 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:27:47.708154 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:27:47.759450 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:27:47.759714 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:27:47.775576 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:27:47.775630 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:27:47.786392 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:27:47.786424 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:27:47.811972 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:27:47.812033 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:27:47.829780 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:27:47.829840 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Dec 13 01:27:47.841295 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:27:47.841356 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:27:47.878072 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:27:47.893836 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:27:47.893929 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:27:47.908600 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 01:27:47.908655 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:27:47.923051 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Dec 13 01:27:47.923110 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:27:47.935724 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:27:47.935777 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:27:47.949599 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:27:47.949721 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:27:47.960894 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:27:47.960994 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:27:48.045089 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:27:48.045239 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:27:48.054736 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:27:48.069035 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:27:48.069096 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:27:48.093934 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:27:48.108890 systemd[1]: Switching root. Dec 13 01:27:48.174853 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Dec 13 01:27:48.174888 systemd-journald[217]: Journal stopped Dec 13 01:27:51.951388 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 01:27:51.951418 kernel: SELinux: policy capability open_perms=1 Dec 13 01:27:51.951428 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 01:27:51.951474 kernel: SELinux: policy capability always_check_network=0 Dec 13 01:27:51.951483 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 01:27:51.951491 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 01:27:51.951500 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 01:27:51.951508 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 01:27:51.951516 kernel: audit: type=1403 audit(1734053269.020:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 01:27:51.951525 systemd[1]: Successfully loaded SELinux policy in 136.163ms. Dec 13 01:27:51.951537 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.481ms. Dec 13 01:27:51.951547 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:27:51.951557 systemd[1]: Detected virtualization microsoft. Dec 13 01:27:51.951566 systemd[1]: Detected architecture arm64. Dec 13 01:27:51.951615 systemd[1]: Detected first boot. Dec 13 01:27:51.951626 systemd[1]: Hostname set to ci-4081.2.1-a-c1e94b9ee1. Dec 13 01:27:51.951635 systemd[1]: Initializing machine ID from random generator. Dec 13 01:27:51.951644 zram_generator::config[1160]: No configuration found. Dec 13 01:27:51.951654 systemd[1]: Populated /etc with preset unit settings. Dec 13 01:27:51.951674 systemd[1]: initrd-switch-root.service: Deactivated successfully. Dec 13 01:27:51.951683 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Dec 13 01:27:51.951693 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 01:27:51.951730 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Dec 13 01:27:51.951755 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Dec 13 01:27:51.951765 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Dec 13 01:27:51.951774 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Dec 13 01:27:51.951783 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Dec 13 01:27:51.951793 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Dec 13 01:27:51.951803 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Dec 13 01:27:51.951813 systemd[1]: Created slice user.slice - User and Session Slice. Dec 13 01:27:51.951823 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:27:51.951832 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:27:51.951841 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Dec 13 01:27:51.951851 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Dec 13 01:27:51.951860 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Dec 13 01:27:51.951894 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:27:51.951919 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Dec 13 01:27:51.951930 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:27:51.951940 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Dec 13 01:27:51.951949 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Dec 13 01:27:51.951960 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Dec 13 01:27:51.951970 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Dec 13 01:27:51.951979 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:27:51.951990 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:27:51.951999 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:27:51.952010 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:27:51.952019 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Dec 13 01:27:51.952029 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Dec 13 01:27:51.952038 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:27:51.952047 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:27:51.952057 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:27:51.952077 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Dec 13 01:27:51.952112 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Dec 13 01:27:51.952122 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Dec 13 01:27:51.952131 systemd[1]: Mounting media.mount - External Media Directory... Dec 13 01:27:51.952141 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... 
Dec 13 01:27:51.952150 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Dec 13 01:27:51.952160 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Dec 13 01:27:51.952171 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 01:27:51.952181 systemd[1]: Reached target machines.target - Containers. Dec 13 01:27:51.952191 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Dec 13 01:27:51.952200 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:27:51.952210 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:27:51.952220 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Dec 13 01:27:51.952230 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:27:51.952240 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:27:51.952267 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:27:51.952297 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Dec 13 01:27:51.952307 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:27:51.952317 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 01:27:51.952326 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Dec 13 01:27:51.952336 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Dec 13 01:27:51.952345 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Dec 13 01:27:51.952354 systemd[1]: Stopped systemd-fsck-usr.service. Dec 13 01:27:51.952365 kernel: ACPI: bus type drm_connector registered Dec 13 01:27:51.952397 kernel: loop: module loaded Dec 13 01:27:51.952422 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:27:51.952432 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:27:51.952441 kernel: fuse: init (API version 7.39) Dec 13 01:27:51.952450 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 01:27:51.952460 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 01:27:51.952485 systemd-journald[1263]: Collecting audit messages is disabled. Dec 13 01:27:51.952508 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:27:51.952519 systemd-journald[1263]: Journal started Dec 13 01:27:51.952539 systemd-journald[1263]: Runtime Journal (/run/log/journal/dce1d799ea184491bd7ac53d00384385) is 8.0M, max 78.5M, 70.5M free. Dec 13 01:27:50.864846 systemd[1]: Queued start job for default target multi-user.target. Dec 13 01:27:51.003168 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Dec 13 01:27:51.003681 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 01:27:51.004086 systemd[1]: systemd-journald.service: Consumed 3.349s CPU time. Dec 13 01:27:51.979952 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 01:27:51.980183 systemd[1]: Stopped verity-setup.service. 
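The systemd-journald[1263] entries above show the journal service starting with an 8.0M runtime journal under /run/log/journal. As a hedged illustration only, assuming the python-systemd bindings (systemd.journal) are installed on the host, the same boot's PID 1 messages could be read back like so:

#!/usr/bin/env python3
# Sketch, assuming python-systemd is available; prints PID 1 messages
# from the journal whose startup is logged above.
from systemd import journal

reader = journal.Reader()
reader.this_boot()            # restrict to the current boot
reader.add_match(_PID="1")    # only entries emitted by systemd (PID 1)

for entry in reader:
    # __REALTIME_TIMESTAMP is exposed as a datetime by the bindings
    print(entry["__REALTIME_TIMESTAMP"], entry.get("MESSAGE", ""))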
Dec 13 01:27:52.001577 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:27:52.009270 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 01:27:52.019127 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 01:27:52.028179 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 01:27:52.036583 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 01:27:52.046892 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 01:27:52.056322 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 01:27:52.066424 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 01:27:52.076655 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:27:52.088076 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 01:27:52.088304 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 01:27:52.097085 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:27:52.097230 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:27:52.104326 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:27:52.104500 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:27:52.110987 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:27:52.111137 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:27:52.118578 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 01:27:52.118743 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 01:27:52.126011 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:27:52.126154 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:27:52.133215 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:27:52.140875 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 01:27:52.149693 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 01:27:52.157730 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:27:52.176175 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 01:27:52.188754 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Dec 13 01:27:52.196427 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 01:27:52.203244 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 01:27:52.203287 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 01:27:52.210244 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 01:27:52.218779 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 01:27:52.228875 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 01:27:52.237731 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:27:52.251170 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Dec 13 01:27:52.259160 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 01:27:52.266146 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:27:52.268856 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 01:27:52.275343 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:27:52.277918 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:27:52.290987 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 01:27:52.300047 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:27:52.311575 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Dec 13 01:27:52.322250 systemd-journald[1263]: Time spent on flushing to /var/log/journal/dce1d799ea184491bd7ac53d00384385 is 17.667ms for 904 entries. Dec 13 01:27:52.322250 systemd-journald[1263]: System Journal (/var/log/journal/dce1d799ea184491bd7ac53d00384385) is 8.0M, max 2.6G, 2.6G free. Dec 13 01:27:52.361971 systemd-journald[1263]: Received client request to flush runtime journal. Dec 13 01:27:52.335870 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 01:27:52.346279 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Dec 13 01:27:52.354694 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 01:27:52.365701 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 01:27:52.373180 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 01:27:52.389697 kernel: loop0: detected capacity change from 0 to 31320 Dec 13 01:27:52.392286 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 01:27:52.407206 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 01:27:52.414564 udevadm[1297]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 01:27:52.487613 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 01:27:52.488975 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 01:27:52.502165 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:27:52.515523 systemd-tmpfiles[1296]: ACLs are not supported, ignoring. Dec 13 01:27:52.515542 systemd-tmpfiles[1296]: ACLs are not supported, ignoring. Dec 13 01:27:52.519517 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:27:52.535824 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 01:27:52.653696 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 01:27:52.668862 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:27:52.687618 systemd-tmpfiles[1314]: ACLs are not supported, ignoring. Dec 13 01:27:52.687960 systemd-tmpfiles[1314]: ACLs are not supported, ignoring. 
Dec 13 01:27:52.692197 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:27:52.757684 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 01:27:52.823701 kernel: loop1: detected capacity change from 0 to 114432 Dec 13 01:27:53.295936 kernel: loop2: detected capacity change from 0 to 194096 Dec 13 01:27:53.350696 kernel: loop3: detected capacity change from 0 to 114328 Dec 13 01:27:53.555693 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 01:27:53.568826 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:27:53.588848 systemd-udevd[1323]: Using default interface naming scheme 'v255'. Dec 13 01:27:53.670682 kernel: loop4: detected capacity change from 0 to 31320 Dec 13 01:27:53.679698 kernel: loop5: detected capacity change from 0 to 114432 Dec 13 01:27:53.688681 kernel: loop6: detected capacity change from 0 to 194096 Dec 13 01:27:53.697687 kernel: loop7: detected capacity change from 0 to 114328 Dec 13 01:27:53.701394 (sd-merge)[1325]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Dec 13 01:27:53.701837 (sd-merge)[1325]: Merged extensions into '/usr'. Dec 13 01:27:53.704134 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:27:53.725414 systemd[1]: Reloading requested from client PID 1294 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 01:27:53.725429 systemd[1]: Reloading... Dec 13 01:27:53.832731 zram_generator::config[1374]: No configuration found. Dec 13 01:27:53.839168 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 01:27:53.874000 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1327) Dec 13 01:27:53.912681 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1327) Dec 13 01:27:53.934697 kernel: hv_vmbus: registering driver hv_balloon Dec 13 01:27:53.945219 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Dec 13 01:27:53.945316 kernel: hv_balloon: Memory hot add disabled on ARM64 Dec 13 01:27:53.952749 kernel: hv_vmbus: registering driver hyperv_fb Dec 13 01:27:53.965493 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Dec 13 01:27:53.965577 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Dec 13 01:27:53.974226 kernel: Console: switching to colour dummy device 80x25 Dec 13 01:27:53.983622 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 01:27:54.044886 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:27:54.080689 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1327) Dec 13 01:27:54.121223 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Dec 13 01:27:54.121406 systemd[1]: Reloading finished in 395 ms. Dec 13 01:27:54.152252 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Dec 13 01:27:54.185957 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Dec 13 01:27:54.200836 systemd[1]: Starting ensure-sysext.service... 
Dec 13 01:27:54.209623 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 01:27:54.218863 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 01:27:54.236845 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:27:54.245844 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:27:54.256209 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 01:27:54.267982 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 01:27:54.280447 systemd[1]: Reloading requested from client PID 1480 ('systemctl') (unit ensure-sysext.service)... Dec 13 01:27:54.280468 systemd[1]: Reloading... Dec 13 01:27:54.290544 systemd-tmpfiles[1484]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 01:27:54.291590 systemd-tmpfiles[1484]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 01:27:54.292321 systemd-tmpfiles[1484]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 01:27:54.292553 systemd-tmpfiles[1484]: ACLs are not supported, ignoring. Dec 13 01:27:54.292603 systemd-tmpfiles[1484]: ACLs are not supported, ignoring. Dec 13 01:27:54.343941 systemd-tmpfiles[1484]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:27:54.343956 systemd-tmpfiles[1484]: Skipping /boot Dec 13 01:27:54.358098 systemd-tmpfiles[1484]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 01:27:54.358439 systemd-tmpfiles[1484]: Skipping /boot Dec 13 01:27:54.368694 zram_generator::config[1523]: No configuration found. Dec 13 01:27:54.472239 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:27:54.549045 systemd[1]: Reloading finished in 268 ms. Dec 13 01:27:54.562489 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:27:54.582961 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:27:54.611827 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 01:27:54.621296 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 01:27:54.629443 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 01:27:54.645024 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:27:54.655874 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 01:27:54.665567 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 01:27:54.672176 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:27:54.686655 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:27:54.688526 lvm[1585]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:27:54.693095 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Dec 13 01:27:54.702986 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:27:54.718327 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:27:54.725423 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:27:54.728804 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:27:54.728958 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:27:54.740804 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 01:27:54.749633 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 01:27:54.751101 augenrules[1602]: No rules Dec 13 01:27:54.758149 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:27:54.765774 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 01:27:54.777460 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:27:54.777976 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:27:54.787720 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:27:54.787873 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:27:54.799707 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 01:27:54.817836 systemd[1]: Finished ensure-sysext.service. Dec 13 01:27:54.827968 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:27:54.836398 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 01:27:54.846302 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 01:27:54.852369 lvm[1621]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 01:27:54.857891 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 01:27:54.870880 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 01:27:54.880643 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 01:27:54.889149 systemd-networkd[1482]: lo: Link UP Dec 13 01:27:54.889599 systemd-networkd[1482]: lo: Gained carrier Dec 13 01:27:54.890700 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 01:27:54.892947 systemd-networkd[1482]: Enumeration completed Dec 13 01:27:54.893384 systemd-networkd[1482]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:27:54.893459 systemd-networkd[1482]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 01:27:54.900112 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 01:27:54.900185 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 01:27:54.904848 systemd-resolved[1588]: Positive Trust Anchors: Dec 13 01:27:54.905177 systemd-resolved[1588]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:27:54.905259 systemd-resolved[1588]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:27:54.918308 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 01:27:54.925171 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 01:27:54.926531 systemd-resolved[1588]: Using system hostname 'ci-4081.2.1-a-c1e94b9ee1'. Dec 13 01:27:54.932856 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 01:27:54.933008 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 01:27:54.939957 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 01:27:54.940095 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 01:27:54.946970 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 01:27:54.947105 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 01:27:54.954262 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 01:27:54.954387 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 01:27:54.965694 kernel: mlx5_core 6353:00:02.0 enP25427s1: Link up Dec 13 01:27:54.971257 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 01:27:54.980937 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 01:27:54.981018 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 01:27:54.996745 kernel: hv_netvsc 0022487b-777d-0022-487b-777d0022487b eth0: Data path switched to VF: enP25427s1 Dec 13 01:27:54.997466 systemd-networkd[1482]: enP25427s1: Link UP Dec 13 01:27:54.997562 systemd-networkd[1482]: eth0: Link UP Dec 13 01:27:54.997568 systemd-networkd[1482]: eth0: Gained carrier Dec 13 01:27:54.997582 systemd-networkd[1482]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:27:54.999697 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:27:55.005881 systemd[1]: Reached target network.target - Network. Dec 13 01:27:55.011904 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:27:55.012132 systemd-networkd[1482]: enP25427s1: Gained carrier Dec 13 01:27:55.025758 systemd-networkd[1482]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 13 01:27:55.294357 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 01:27:55.302624 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Dec 13 01:27:56.805867 systemd-networkd[1482]: enP25427s1: Gained IPv6LL Dec 13 01:27:56.933896 systemd-networkd[1482]: eth0: Gained IPv6LL Dec 13 01:27:56.937736 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 01:27:56.945088 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 01:27:57.940433 ldconfig[1289]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 01:27:57.951302 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 01:27:57.963824 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 01:27:57.977547 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 01:27:57.985263 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 01:27:57.991624 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 01:27:57.999368 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 01:27:58.006933 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 01:27:58.013510 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 01:27:58.021238 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 01:27:58.029022 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 01:27:58.029055 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:27:58.034561 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:27:58.056125 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 01:27:58.065152 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 01:27:58.078290 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 01:27:58.084880 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 01:27:58.090924 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:27:58.096585 systemd[1]: Reached target basic.target - Basic System. Dec 13 01:27:58.101972 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:27:58.101999 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 01:27:58.111799 systemd[1]: Starting chronyd.service - NTP client/server... Dec 13 01:27:58.119804 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 01:27:58.130904 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 01:27:58.139509 (chronyd)[1640]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Dec 13 01:27:58.139841 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 01:27:58.148918 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 01:27:58.159901 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Dec 13 01:27:58.163975 jq[1645]: false Dec 13 01:27:58.165494 chronyd[1649]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Dec 13 01:27:58.167968 chronyd[1649]: Timezone right/UTC failed leap second check, ignoring Dec 13 01:27:58.168414 chronyd[1649]: Loaded seccomp filter (level 2) Dec 13 01:27:58.169073 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 01:27:58.169222 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Dec 13 01:27:58.172873 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Dec 13 01:27:58.182574 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Dec 13 01:27:58.184072 KVP[1650]: KVP starting; pid is:1650 Dec 13 01:27:58.184440 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:27:58.193936 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 01:27:58.206291 kernel: hv_utils: KVP IC version 4.0 Dec 13 01:27:58.205857 KVP[1650]: KVP LIC Version: 3.1 Dec 13 01:27:58.212866 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 01:27:58.220905 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Dec 13 01:27:58.236311 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 01:27:58.244786 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 01:27:58.254876 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 01:27:58.265620 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 01:27:58.266144 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 01:27:58.268404 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 01:27:58.277799 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Dec 13 01:27:58.288827 systemd[1]: Started chronyd.service - NTP client/server. 
Dec 13 01:27:58.290922 extend-filesystems[1647]: Found loop4 Dec 13 01:27:58.290922 extend-filesystems[1647]: Found loop5 Dec 13 01:27:58.290922 extend-filesystems[1647]: Found loop6 Dec 13 01:27:58.290922 extend-filesystems[1647]: Found loop7 Dec 13 01:27:58.290922 extend-filesystems[1647]: Found sda Dec 13 01:27:58.290922 extend-filesystems[1647]: Found sda1 Dec 13 01:27:58.290922 extend-filesystems[1647]: Found sda2 Dec 13 01:27:58.290922 extend-filesystems[1647]: Found sda3 Dec 13 01:27:58.290922 extend-filesystems[1647]: Found usr Dec 13 01:27:58.290922 extend-filesystems[1647]: Found sda4 Dec 13 01:27:58.290922 extend-filesystems[1647]: Found sda6 Dec 13 01:27:58.290922 extend-filesystems[1647]: Found sda7 Dec 13 01:27:58.290922 extend-filesystems[1647]: Found sda9 Dec 13 01:27:58.290922 extend-filesystems[1647]: Checking size of /dev/sda9 Dec 13 01:27:58.536480 coreos-metadata[1642]: Dec 13 01:27:58.429 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Dec 13 01:27:58.536480 coreos-metadata[1642]: Dec 13 01:27:58.431 INFO Fetch successful Dec 13 01:27:58.536480 coreos-metadata[1642]: Dec 13 01:27:58.432 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Dec 13 01:27:58.536480 coreos-metadata[1642]: Dec 13 01:27:58.439 INFO Fetch successful Dec 13 01:27:58.536480 coreos-metadata[1642]: Dec 13 01:27:58.439 INFO Fetching http://168.63.129.16/machine/ad2bb8cf-939c-4c49-932b-8b1154771194/bf3813f6%2D4bc0%2D4ec1%2D9eab%2D6a3e7aceb350.%5Fci%2D4081.2.1%2Da%2Dc1e94b9ee1?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Dec 13 01:27:58.536480 coreos-metadata[1642]: Dec 13 01:27:58.445 INFO Fetch successful Dec 13 01:27:58.536480 coreos-metadata[1642]: Dec 13 01:27:58.447 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Dec 13 01:27:58.536480 coreos-metadata[1642]: Dec 13 01:27:58.463 INFO Fetch successful Dec 13 01:27:58.536746 extend-filesystems[1647]: Old size kept for /dev/sda9 Dec 13 01:27:58.536746 extend-filesystems[1647]: Found sr0 Dec 13 01:27:58.548413 jq[1671]: true Dec 13 01:27:58.308028 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 01:27:58.306747 dbus-daemon[1643]: [system] SELinux support is enabled Dec 13 01:27:58.554860 update_engine[1668]: I20241213 01:27:58.412343 1668 main.cc:92] Flatcar Update Engine starting Dec 13 01:27:58.554860 update_engine[1668]: I20241213 01:27:58.422874 1668 update_check_scheduler.cc:74] Next update check in 2m19s Dec 13 01:27:58.328915 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 01:27:58.520719 dbus-daemon[1643]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 01:27:58.329376 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 01:27:58.331054 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 01:27:58.556465 tar[1681]: linux-arm64/helm Dec 13 01:27:58.331215 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 01:27:58.556803 jq[1688]: true Dec 13 01:27:58.338375 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 01:27:58.364968 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 01:27:58.365183 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 01:27:58.401190 systemd[1]: extend-filesystems.service: Deactivated successfully. 
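The coreos-metadata fetches above hit two different Azure endpoints: the WireServer at 168.63.129.16 for the goal state, and the instance metadata service (IMDS) at 169.254.169.254 for the vmSize. A minimal sketch of that IMDS call follows, assuming Python is available on the host; the URL, api-version and format=text are taken from the log line above, IMDS requires the "Metadata: true" request header, and the vmSize shown in the comment is hypothetical.

    import urllib.request

    # Same request coreos-metadata logs above; IMDS refuses requests that
    # do not carry the "Metadata: true" header.
    url = ("http://169.254.169.254/metadata/instance/compute/vmSize"
           "?api-version=2017-08-01&format=text")
    req = urllib.request.Request(url, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(resp.read().decode())  # e.g. "Standard_D4ps_v5" (hypothetical)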
Dec 13 01:27:58.412036 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 01:27:58.438805 systemd-logind[1661]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Dec 13 01:27:58.442749 systemd-logind[1661]: New seat seat0. Dec 13 01:27:58.451080 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 01:27:58.504401 (ntainerd)[1696]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 01:27:58.512695 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 01:27:58.512732 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 01:27:58.529046 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 01:27:58.529067 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Dec 13 01:27:58.549100 systemd[1]: Started update-engine.service - Update Engine. Dec 13 01:27:58.569950 systemd[1]: Started locksmithd.service - Cluster reboot manager. Dec 13 01:27:58.581653 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 01:27:58.602477 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 01:27:58.629553 bash[1724]: Updated "/home/core/.ssh/authorized_keys" Dec 13 01:27:58.632717 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 01:27:58.645739 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Dec 13 01:27:58.662811 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (1689) Dec 13 01:27:58.866744 locksmithd[1731]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 01:27:58.959642 sshd_keygen[1669]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 01:27:58.981313 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 01:27:58.998133 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 01:27:59.007502 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Dec 13 01:27:59.014621 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 01:27:59.020452 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 01:27:59.039080 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 01:27:59.066869 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Dec 13 01:27:59.081713 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Dec 13 01:27:59.099305 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 01:27:59.107879 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Dec 13 01:27:59.119283 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 01:27:59.228571 tar[1681]: linux-arm64/LICENSE Dec 13 01:27:59.228686 tar[1681]: linux-arm64/README.md Dec 13 01:27:59.242298 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Dec 13 01:27:59.370837 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:27:59.381808 (kubelet)[1797]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:27:59.405417 containerd[1696]: time="2024-12-13T01:27:59.405333380Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Dec 13 01:27:59.431465 containerd[1696]: time="2024-12-13T01:27:59.431406700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:27:59.433117 containerd[1696]: time="2024-12-13T01:27:59.433076020Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:27:59.433217 containerd[1696]: time="2024-12-13T01:27:59.433203300Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 01:27:59.433274 containerd[1696]: time="2024-12-13T01:27:59.433262260Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 01:27:59.433499 containerd[1696]: time="2024-12-13T01:27:59.433480900Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 01:27:59.433570 containerd[1696]: time="2024-12-13T01:27:59.433557420Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 01:27:59.433733 containerd[1696]: time="2024-12-13T01:27:59.433712300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:27:59.433801 containerd[1696]: time="2024-12-13T01:27:59.433788060Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:27:59.434027 containerd[1696]: time="2024-12-13T01:27:59.434006260Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:27:59.434090 containerd[1696]: time="2024-12-13T01:27:59.434075700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 01:27:59.434155 containerd[1696]: time="2024-12-13T01:27:59.434140340Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:27:59.434203 containerd[1696]: time="2024-12-13T01:27:59.434191700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 01:27:59.434361 containerd[1696]: time="2024-12-13T01:27:59.434342300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 01:27:59.434709 containerd[1696]: time="2024-12-13T01:27:59.434686820Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 01:27:59.434907 containerd[1696]: time="2024-12-13T01:27:59.434887540Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 01:27:59.434980 containerd[1696]: time="2024-12-13T01:27:59.434967340Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 01:27:59.435116 containerd[1696]: time="2024-12-13T01:27:59.435099260Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 01:27:59.435219 containerd[1696]: time="2024-12-13T01:27:59.435204780Z" level=info msg="metadata content store policy set" policy=shared Dec 13 01:27:59.454651 containerd[1696]: time="2024-12-13T01:27:59.454598820Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 01:27:59.454952 containerd[1696]: time="2024-12-13T01:27:59.454901660Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 01:27:59.454991 containerd[1696]: time="2024-12-13T01:27:59.454977500Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 01:27:59.455018 containerd[1696]: time="2024-12-13T01:27:59.454998620Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 01:27:59.455018 containerd[1696]: time="2024-12-13T01:27:59.455015020Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 01:27:59.455216 containerd[1696]: time="2024-12-13T01:27:59.455191060Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 01:27:59.455597 containerd[1696]: time="2024-12-13T01:27:59.455572060Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 01:27:59.456233 containerd[1696]: time="2024-12-13T01:27:59.456199700Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 01:27:59.456233 containerd[1696]: time="2024-12-13T01:27:59.456230780Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 01:27:59.456325 containerd[1696]: time="2024-12-13T01:27:59.456246380Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 01:27:59.456325 containerd[1696]: time="2024-12-13T01:27:59.456260540Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 01:27:59.456325 containerd[1696]: time="2024-12-13T01:27:59.456272980Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 01:27:59.456325 containerd[1696]: time="2024-12-13T01:27:59.456285940Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 01:27:59.456325 containerd[1696]: time="2024-12-13T01:27:59.456301420Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Dec 13 01:27:59.456829 containerd[1696]: time="2024-12-13T01:27:59.456803140Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 01:27:59.456858 containerd[1696]: time="2024-12-13T01:27:59.456831740Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 01:27:59.456858 containerd[1696]: time="2024-12-13T01:27:59.456848460Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 01:27:59.456900 containerd[1696]: time="2024-12-13T01:27:59.456861500Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 01:27:59.456900 containerd[1696]: time="2024-12-13T01:27:59.456889060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 01:27:59.456941 containerd[1696]: time="2024-12-13T01:27:59.456904300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 01:27:59.456941 containerd[1696]: time="2024-12-13T01:27:59.456917420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 01:27:59.456941 containerd[1696]: time="2024-12-13T01:27:59.456930380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 01:27:59.457002 containerd[1696]: time="2024-12-13T01:27:59.456942460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 01:27:59.457002 containerd[1696]: time="2024-12-13T01:27:59.456955740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 01:27:59.457002 containerd[1696]: time="2024-12-13T01:27:59.456966740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 01:27:59.457002 containerd[1696]: time="2024-12-13T01:27:59.456979220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 01:27:59.457002 containerd[1696]: time="2024-12-13T01:27:59.456992980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 01:27:59.457087 containerd[1696]: time="2024-12-13T01:27:59.457007340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 01:27:59.457087 containerd[1696]: time="2024-12-13T01:27:59.457019820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 01:27:59.457087 containerd[1696]: time="2024-12-13T01:27:59.457031500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 01:27:59.457087 containerd[1696]: time="2024-12-13T01:27:59.457043820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 01:27:59.457087 containerd[1696]: time="2024-12-13T01:27:59.457062260Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 01:27:59.457193 containerd[1696]: time="2024-12-13T01:27:59.457090300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Dec 13 01:27:59.457193 containerd[1696]: time="2024-12-13T01:27:59.457103140Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 01:27:59.457193 containerd[1696]: time="2024-12-13T01:27:59.457114300Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 01:27:59.457193 containerd[1696]: time="2024-12-13T01:27:59.457168060Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 01:27:59.457193 containerd[1696]: time="2024-12-13T01:27:59.457186020Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 01:27:59.457404 containerd[1696]: time="2024-12-13T01:27:59.457196820Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 01:27:59.457404 containerd[1696]: time="2024-12-13T01:27:59.457208900Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 01:27:59.457404 containerd[1696]: time="2024-12-13T01:27:59.457219100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 01:27:59.457404 containerd[1696]: time="2024-12-13T01:27:59.457230620Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 01:27:59.457404 containerd[1696]: time="2024-12-13T01:27:59.457241420Z" level=info msg="NRI interface is disabled by configuration." Dec 13 01:27:59.457404 containerd[1696]: time="2024-12-13T01:27:59.457251340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 01:27:59.458997 containerd[1696]: time="2024-12-13T01:27:59.458441100Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 01:27:59.459123 containerd[1696]: time="2024-12-13T01:27:59.459004540Z" level=info msg="Connect containerd service" Dec 13 01:27:59.459123 containerd[1696]: time="2024-12-13T01:27:59.459050660Z" level=info msg="using legacy CRI server" Dec 13 01:27:59.459123 containerd[1696]: time="2024-12-13T01:27:59.459058060Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 01:27:59.459185 containerd[1696]: time="2024-12-13T01:27:59.459156700Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 01:27:59.460936 containerd[1696]: time="2024-12-13T01:27:59.460880580Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:27:59.461404 
containerd[1696]: time="2024-12-13T01:27:59.461094540Z" level=info msg="Start subscribing containerd event" Dec 13 01:27:59.461404 containerd[1696]: time="2024-12-13T01:27:59.461153020Z" level=info msg="Start recovering state" Dec 13 01:27:59.461404 containerd[1696]: time="2024-12-13T01:27:59.461230780Z" level=info msg="Start event monitor" Dec 13 01:27:59.461404 containerd[1696]: time="2024-12-13T01:27:59.461241500Z" level=info msg="Start snapshots syncer" Dec 13 01:27:59.461404 containerd[1696]: time="2024-12-13T01:27:59.461252220Z" level=info msg="Start cni network conf syncer for default" Dec 13 01:27:59.461404 containerd[1696]: time="2024-12-13T01:27:59.461259420Z" level=info msg="Start streaming server" Dec 13 01:27:59.463816 containerd[1696]: time="2024-12-13T01:27:59.463783460Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 01:27:59.463987 containerd[1696]: time="2024-12-13T01:27:59.463971180Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 01:27:59.464124 containerd[1696]: time="2024-12-13T01:27:59.464094500Z" level=info msg="containerd successfully booted in 0.059539s" Dec 13 01:27:59.464203 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 01:27:59.472974 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 01:27:59.479483 systemd[1]: Startup finished in 710ms (kernel) + 11.624s (initrd) + 10.593s (userspace) = 22.928s. Dec 13 01:27:59.790808 login[1787]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:27:59.796093 login[1788]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:27:59.801169 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 01:27:59.814100 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 01:27:59.816635 systemd-logind[1661]: New session 1 of user core. Dec 13 01:27:59.820789 systemd-logind[1661]: New session 2 of user core. Dec 13 01:27:59.842758 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 01:27:59.849979 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 01:27:59.856020 (systemd)[1813]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 01:27:59.908992 kubelet[1797]: E1213 01:27:59.908952 1797 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:27:59.911716 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:27:59.911858 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:27:59.979821 systemd[1813]: Queued start job for default target default.target. Dec 13 01:27:59.991185 systemd[1813]: Created slice app.slice - User Application Slice. Dec 13 01:27:59.991401 systemd[1813]: Reached target paths.target - Paths. Dec 13 01:27:59.991490 systemd[1813]: Reached target timers.target - Timers. Dec 13 01:27:59.992842 systemd[1813]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 01:28:00.011497 systemd[1813]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 01:28:00.011759 systemd[1813]: Reached target sockets.target - Sockets. 
Dec 13 01:28:00.011889 systemd[1813]: Reached target basic.target - Basic System. Dec 13 01:28:00.012005 systemd[1813]: Reached target default.target - Main User Target. Dec 13 01:28:00.012107 systemd[1813]: Startup finished in 149ms. Dec 13 01:28:00.012259 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 01:28:00.016821 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 01:28:00.017517 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 01:28:01.070814 waagent[1785]: 2024-12-13T01:28:01.070716Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Dec 13 01:28:01.076825 waagent[1785]: 2024-12-13T01:28:01.076752Z INFO Daemon Daemon OS: flatcar 4081.2.1 Dec 13 01:28:01.081872 waagent[1785]: 2024-12-13T01:28:01.081811Z INFO Daemon Daemon Python: 3.11.9 Dec 13 01:28:01.086336 waagent[1785]: 2024-12-13T01:28:01.086272Z INFO Daemon Daemon Run daemon Dec 13 01:28:01.090349 waagent[1785]: 2024-12-13T01:28:01.090279Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.2.1' Dec 13 01:28:01.100035 waagent[1785]: 2024-12-13T01:28:01.099960Z INFO Daemon Daemon Using waagent for provisioning Dec 13 01:28:01.106486 waagent[1785]: 2024-12-13T01:28:01.106436Z INFO Daemon Daemon Activate resource disk Dec 13 01:28:01.112312 waagent[1785]: 2024-12-13T01:28:01.112253Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Dec 13 01:28:01.124704 waagent[1785]: 2024-12-13T01:28:01.124623Z INFO Daemon Daemon Found device: None Dec 13 01:28:01.129901 waagent[1785]: 2024-12-13T01:28:01.129845Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Dec 13 01:28:01.140041 waagent[1785]: 2024-12-13T01:28:01.139984Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Dec 13 01:28:01.154231 waagent[1785]: 2024-12-13T01:28:01.154170Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 13 01:28:01.159983 waagent[1785]: 2024-12-13T01:28:01.159930Z INFO Daemon Daemon Running default provisioning handler Dec 13 01:28:01.172161 waagent[1785]: 2024-12-13T01:28:01.171592Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Dec 13 01:28:01.185920 waagent[1785]: 2024-12-13T01:28:01.185854Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Dec 13 01:28:01.196193 waagent[1785]: 2024-12-13T01:28:01.196126Z INFO Daemon Daemon cloud-init is enabled: False Dec 13 01:28:01.201756 waagent[1785]: 2024-12-13T01:28:01.201694Z INFO Daemon Daemon Copying ovf-env.xml Dec 13 01:28:01.348124 waagent[1785]: 2024-12-13T01:28:01.347170Z INFO Daemon Daemon Successfully mounted dvd Dec 13 01:28:01.363814 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Dec 13 01:28:01.364916 waagent[1785]: 2024-12-13T01:28:01.364640Z INFO Daemon Daemon Detect protocol endpoint Dec 13 01:28:01.370478 waagent[1785]: 2024-12-13T01:28:01.370410Z INFO Daemon Daemon Clean protocol and wireserver endpoint Dec 13 01:28:01.376623 waagent[1785]: 2024-12-13T01:28:01.376559Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Dec 13 01:28:01.383791 waagent[1785]: 2024-12-13T01:28:01.383723Z INFO Daemon Daemon Test for route to 168.63.129.16 Dec 13 01:28:01.389938 waagent[1785]: 2024-12-13T01:28:01.389885Z INFO Daemon Daemon Route to 168.63.129.16 exists Dec 13 01:28:01.395352 waagent[1785]: 2024-12-13T01:28:01.395296Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Dec 13 01:28:01.443202 waagent[1785]: 2024-12-13T01:28:01.443150Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Dec 13 01:28:01.450378 waagent[1785]: 2024-12-13T01:28:01.450348Z INFO Daemon Daemon Wire protocol version:2012-11-30 Dec 13 01:28:01.455839 waagent[1785]: 2024-12-13T01:28:01.455785Z INFO Daemon Daemon Server preferred version:2015-04-05 Dec 13 01:28:01.705342 waagent[1785]: 2024-12-13T01:28:01.705181Z INFO Daemon Daemon Initializing goal state during protocol detection Dec 13 01:28:01.712357 waagent[1785]: 2024-12-13T01:28:01.712286Z INFO Daemon Daemon Forcing an update of the goal state. Dec 13 01:28:01.725465 waagent[1785]: 2024-12-13T01:28:01.725413Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 13 01:28:01.745894 waagent[1785]: 2024-12-13T01:28:01.745850Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Dec 13 01:28:01.752544 waagent[1785]: 2024-12-13T01:28:01.752497Z INFO Daemon Dec 13 01:28:01.755890 waagent[1785]: 2024-12-13T01:28:01.755846Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 562a478d-707c-487d-9f5d-8ea2982747b3 eTag: 3302272623766868605 source: Fabric] Dec 13 01:28:01.769279 waagent[1785]: 2024-12-13T01:28:01.769229Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Dec 13 01:28:01.776722 waagent[1785]: 2024-12-13T01:28:01.776650Z INFO Daemon Dec 13 01:28:01.780089 waagent[1785]: 2024-12-13T01:28:01.780040Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Dec 13 01:28:01.794264 waagent[1785]: 2024-12-13T01:28:01.794227Z INFO Daemon Daemon Downloading artifacts profile blob Dec 13 01:28:01.880797 waagent[1785]: 2024-12-13T01:28:01.880707Z INFO Daemon Downloaded certificate {'thumbprint': '900FDA67C77A3FBAF35E8905B3C32DF0DF76A571', 'hasPrivateKey': False} Dec 13 01:28:01.891762 waagent[1785]: 2024-12-13T01:28:01.891705Z INFO Daemon Downloaded certificate {'thumbprint': '0A4D25AC39F928185E76DB450482849830AD96F9', 'hasPrivateKey': True} Dec 13 01:28:01.902247 waagent[1785]: 2024-12-13T01:28:01.902193Z INFO Daemon Fetch goal state completed Dec 13 01:28:01.914436 waagent[1785]: 2024-12-13T01:28:01.914369Z INFO Daemon Daemon Starting provisioning Dec 13 01:28:01.919552 waagent[1785]: 2024-12-13T01:28:01.919488Z INFO Daemon Daemon Handle ovf-env.xml. Dec 13 01:28:01.924987 waagent[1785]: 2024-12-13T01:28:01.924937Z INFO Daemon Daemon Set hostname [ci-4081.2.1-a-c1e94b9ee1] Dec 13 01:28:01.952028 waagent[1785]: 2024-12-13T01:28:01.951940Z INFO Daemon Daemon Publish hostname [ci-4081.2.1-a-c1e94b9ee1] Dec 13 01:28:01.958844 waagent[1785]: 2024-12-13T01:28:01.958731Z INFO Daemon Daemon Examine /proc/net/route for primary interface Dec 13 01:28:01.965289 waagent[1785]: 2024-12-13T01:28:01.965226Z INFO Daemon Daemon Primary interface is [eth0] Dec 13 01:28:02.009832 systemd-networkd[1482]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 01:28:02.010389 systemd-networkd[1482]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Dec 13 01:28:02.010430 systemd-networkd[1482]: eth0: DHCP lease lost Dec 13 01:28:02.011192 waagent[1785]: 2024-12-13T01:28:02.011065Z INFO Daemon Daemon Create user account if not exists Dec 13 01:28:02.017188 waagent[1785]: 2024-12-13T01:28:02.017112Z INFO Daemon Daemon User core already exists, skip useradd Dec 13 01:28:02.023194 waagent[1785]: 2024-12-13T01:28:02.023115Z INFO Daemon Daemon Configure sudoer Dec 13 01:28:02.023334 systemd-networkd[1482]: eth0: DHCPv6 lease lost Dec 13 01:28:02.028393 waagent[1785]: 2024-12-13T01:28:02.028280Z INFO Daemon Daemon Configure sshd Dec 13 01:28:02.033498 waagent[1785]: 2024-12-13T01:28:02.033428Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Dec 13 01:28:02.046411 waagent[1785]: 2024-12-13T01:28:02.046327Z INFO Daemon Daemon Deploy ssh public key. Dec 13 01:28:02.056728 systemd-networkd[1482]: eth0: DHCPv4 address 10.200.20.11/24, gateway 10.200.20.1 acquired from 168.63.129.16 Dec 13 01:28:03.158665 waagent[1785]: 2024-12-13T01:28:03.158605Z INFO Daemon Daemon Provisioning complete Dec 13 01:28:03.177923 waagent[1785]: 2024-12-13T01:28:03.177871Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Dec 13 01:28:03.184296 waagent[1785]: 2024-12-13T01:28:03.184236Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Dec 13 01:28:03.193816 waagent[1785]: 2024-12-13T01:28:03.193750Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Dec 13 01:28:03.333243 waagent[1869]: 2024-12-13T01:28:03.332541Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Dec 13 01:28:03.333243 waagent[1869]: 2024-12-13T01:28:03.332719Z INFO ExtHandler ExtHandler OS: flatcar 4081.2.1 Dec 13 01:28:03.333243 waagent[1869]: 2024-12-13T01:28:03.332783Z INFO ExtHandler ExtHandler Python: 3.11.9 Dec 13 01:28:03.377021 waagent[1869]: 2024-12-13T01:28:03.376943Z INFO ExtHandler ExtHandler Distro: flatcar-4081.2.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Dec 13 01:28:03.377324 waagent[1869]: 2024-12-13T01:28:03.377285Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 01:28:03.377468 waagent[1869]: 2024-12-13T01:28:03.377433Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 01:28:03.386131 waagent[1869]: 2024-12-13T01:28:03.386055Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Dec 13 01:28:03.392891 waagent[1869]: 2024-12-13T01:28:03.392827Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Dec 13 01:28:03.393688 waagent[1869]: 2024-12-13T01:28:03.393629Z INFO ExtHandler Dec 13 01:28:03.393859 waagent[1869]: 2024-12-13T01:28:03.393821Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: afb27139-1ba5-468d-8f35-f231b80f51a1 eTag: 3302272623766868605 source: Fabric] Dec 13 01:28:03.394303 waagent[1869]: 2024-12-13T01:28:03.394257Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Dec 13 01:28:03.395186 waagent[1869]: 2024-12-13T01:28:03.395125Z INFO ExtHandler Dec 13 01:28:03.395696 waagent[1869]: 2024-12-13T01:28:03.395338Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Dec 13 01:28:03.399903 waagent[1869]: 2024-12-13T01:28:03.399864Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Dec 13 01:28:03.493012 waagent[1869]: 2024-12-13T01:28:03.492890Z INFO ExtHandler Downloaded certificate {'thumbprint': '900FDA67C77A3FBAF35E8905B3C32DF0DF76A571', 'hasPrivateKey': False} Dec 13 01:28:03.493699 waagent[1869]: 2024-12-13T01:28:03.493525Z INFO ExtHandler Downloaded certificate {'thumbprint': '0A4D25AC39F928185E76DB450482849830AD96F9', 'hasPrivateKey': True} Dec 13 01:28:03.494190 waagent[1869]: 2024-12-13T01:28:03.494141Z INFO ExtHandler Fetch goal state completed Dec 13 01:28:03.511801 waagent[1869]: 2024-12-13T01:28:03.511741Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1869 Dec 13 01:28:03.512699 waagent[1869]: 2024-12-13T01:28:03.512048Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Dec 13 01:28:03.513831 waagent[1869]: 2024-12-13T01:28:03.513780Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.2.1', '', 'Flatcar Container Linux by Kinvolk'] Dec 13 01:28:03.514229 waagent[1869]: 2024-12-13T01:28:03.514188Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Dec 13 01:28:03.543093 waagent[1869]: 2024-12-13T01:28:03.543047Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Dec 13 01:28:03.543295 waagent[1869]: 2024-12-13T01:28:03.543254Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Dec 13 01:28:03.549415 waagent[1869]: 2024-12-13T01:28:03.549363Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Dec 13 01:28:03.556191 systemd[1]: Reloading requested from client PID 1884 ('systemctl') (unit waagent.service)... Dec 13 01:28:03.556208 systemd[1]: Reloading... Dec 13 01:28:03.634687 zram_generator::config[1916]: No configuration found. Dec 13 01:28:03.750152 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:28:03.831175 systemd[1]: Reloading finished in 274 ms. Dec 13 01:28:03.859428 waagent[1869]: 2024-12-13T01:28:03.855903Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Dec 13 01:28:03.862876 systemd[1]: Reloading requested from client PID 1972 ('systemctl') (unit waagent.service)... Dec 13 01:28:03.862891 systemd[1]: Reloading... Dec 13 01:28:03.934686 zram_generator::config[2006]: No configuration found. Dec 13 01:28:04.047310 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:28:04.125340 systemd[1]: Reloading finished in 262 ms. 
Dec 13 01:28:04.149077 waagent[1869]: 2024-12-13T01:28:04.147892Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Dec 13 01:28:04.149077 waagent[1869]: 2024-12-13T01:28:04.148060Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Dec 13 01:28:05.120687 waagent[1869]: 2024-12-13T01:28:05.119735Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Dec 13 01:28:05.120687 waagent[1869]: 2024-12-13T01:28:05.120368Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Dec 13 01:28:05.121621 waagent[1869]: 2024-12-13T01:28:05.121566Z INFO ExtHandler ExtHandler Starting env monitor service. Dec 13 01:28:05.121845 waagent[1869]: 2024-12-13T01:28:05.121791Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 01:28:05.122022 waagent[1869]: 2024-12-13T01:28:05.121984Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 01:28:05.122458 waagent[1869]: 2024-12-13T01:28:05.122407Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Dec 13 01:28:05.122894 waagent[1869]: 2024-12-13T01:28:05.122837Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Dec 13 01:28:05.123062 waagent[1869]: 2024-12-13T01:28:05.123023Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Dec 13 01:28:05.123147 waagent[1869]: 2024-12-13T01:28:05.123115Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 01:28:05.123297 waagent[1869]: 2024-12-13T01:28:05.123257Z INFO EnvHandler ExtHandler Configure routes Dec 13 01:28:05.123359 waagent[1869]: 2024-12-13T01:28:05.123331Z INFO EnvHandler ExtHandler Gateway:None Dec 13 01:28:05.123406 waagent[1869]: 2024-12-13T01:28:05.123382Z INFO EnvHandler ExtHandler Routes:None Dec 13 01:28:05.123627 waagent[1869]: 2024-12-13T01:28:05.123556Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Dec 13 01:28:05.123926 waagent[1869]: 2024-12-13T01:28:05.123766Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Dec 13 01:28:05.124367 waagent[1869]: 2024-12-13T01:28:05.124305Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 13 01:28:05.124713 waagent[1869]: 2024-12-13T01:28:05.124618Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Dec 13 01:28:05.124784 waagent[1869]: 2024-12-13T01:28:05.124732Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 13 01:28:05.127246 waagent[1869]: 2024-12-13T01:28:05.127103Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 13 01:28:05.127246 waagent[1869]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 13 01:28:05.127246 waagent[1869]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Dec 13 01:28:05.127246 waagent[1869]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 13 01:28:05.127246 waagent[1869]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 13 01:28:05.127246 waagent[1869]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 01:28:05.127246 waagent[1869]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 01:28:05.136693 waagent[1869]: 2024-12-13T01:28:05.135118Z INFO ExtHandler ExtHandler Dec 13 01:28:05.136693 waagent[1869]: 2024-12-13T01:28:05.135233Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: a6e7e0aa-9172-4c84-9c41-5f528e295e96 correlation 33c49f86-496a-46e2-a869-9a48717eb8bb created: 2024-12-13T01:26:48.998184Z] Dec 13 01:28:05.136693 waagent[1869]: 2024-12-13T01:28:05.135633Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Dec 13 01:28:05.136693 waagent[1869]: 2024-12-13T01:28:05.136238Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Dec 13 01:28:05.186759 waagent[1869]: 2024-12-13T01:28:05.186630Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: D0519F74-225B-4BC8-96FE-11AA5B8295C2;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Dec 13 01:28:05.225503 waagent[1869]: 2024-12-13T01:28:05.225050Z INFO MonitorHandler ExtHandler Network interfaces: Dec 13 01:28:05.225503 waagent[1869]: Executing ['ip', '-a', '-o', 'link']: Dec 13 01:28:05.225503 waagent[1869]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 13 01:28:05.225503 waagent[1869]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7b:77:7d brd ff:ff:ff:ff:ff:ff Dec 13 01:28:05.225503 waagent[1869]: 3: enP25427s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:7b:77:7d brd ff:ff:ff:ff:ff:ff\ altname enP25427p0s2 Dec 13 01:28:05.225503 waagent[1869]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 13 01:28:05.225503 waagent[1869]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 13 01:28:05.225503 waagent[1869]: 2: eth0 inet 10.200.20.11/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 13 01:28:05.225503 waagent[1869]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 13 01:28:05.225503 waagent[1869]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Dec 13 01:28:05.225503 waagent[1869]: 2: eth0 inet6 fe80::222:48ff:fe7b:777d/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Dec 13 01:28:05.225503 waagent[1869]: 3: enP25427s1 inet6 fe80::222:48ff:fe7b:777d/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Dec 13 01:28:05.257701 waagent[1869]: 2024-12-13T01:28:05.257134Z INFO EnvHandler ExtHandler 
Successfully added Azure fabric firewall rules. Current Firewall rules: Dec 13 01:28:05.257701 waagent[1869]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:28:05.257701 waagent[1869]: pkts bytes target prot opt in out source destination Dec 13 01:28:05.257701 waagent[1869]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:28:05.257701 waagent[1869]: pkts bytes target prot opt in out source destination Dec 13 01:28:05.257701 waagent[1869]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:28:05.257701 waagent[1869]: pkts bytes target prot opt in out source destination Dec 13 01:28:05.257701 waagent[1869]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 13 01:28:05.257701 waagent[1869]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 13 01:28:05.257701 waagent[1869]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 13 01:28:05.260114 waagent[1869]: 2024-12-13T01:28:05.260050Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 13 01:28:05.260114 waagent[1869]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:28:05.260114 waagent[1869]: pkts bytes target prot opt in out source destination Dec 13 01:28:05.260114 waagent[1869]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:28:05.260114 waagent[1869]: pkts bytes target prot opt in out source destination Dec 13 01:28:05.260114 waagent[1869]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:28:05.260114 waagent[1869]: pkts bytes target prot opt in out source destination Dec 13 01:28:05.260114 waagent[1869]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 13 01:28:05.260114 waagent[1869]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 13 01:28:05.260114 waagent[1869]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 13 01:28:05.260371 waagent[1869]: 2024-12-13T01:28:05.260332Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Dec 13 01:28:10.162533 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:28:10.170856 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:28:10.269304 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:28:10.273862 (kubelet)[2101]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:28:10.365689 kubelet[2101]: E1213 01:28:10.365597 2101 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:28:10.368489 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:28:10.368614 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:28:20.619204 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:28:20.626852 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:28:20.721145 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
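The MonitorHandler routing-table dump above is the raw /proc/net/route format, in which destination, gateway and mask are little-endian 32-bit hex fields. A short sketch decoding the exact rows from that dump: the default gateway 0114C80A is 10.200.20.1, 10813FA8 is the WireServer 168.63.129.16 (the same address the firewall rules just above are scoped to), and FEA9FEA9 is the IMDS address 169.254.169.254.

    import socket
    import struct

    def decode(hexfield: str) -> str:
        # /proc/net/route stores IPv4 fields as little-endian 32-bit hex.
        return socket.inet_ntoa(struct.pack("<I", int(hexfield, 16)))

    # (destination, gateway, mask) triples copied from the dump above.
    rows = [
        ("00000000", "0114C80A", "00000000"),  # default via 10.200.20.1
        ("0014C80A", "00000000", "00FFFFFF"),  # 10.200.20.0/24 on-link
        ("0114C80A", "00000000", "FFFFFFFF"),  # host route to the gateway
        ("10813FA8", "0114C80A", "FFFFFFFF"),  # 168.63.129.16 (WireServer)
        ("FEA9FEA9", "0114C80A", "FFFFFFFF"),  # 169.254.169.254 (IMDS)
    ]
    for dest, gw, mask in rows:
        print(decode(dest), decode(gw), decode(mask))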
Dec 13 01:28:20.730013 (kubelet)[2117]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:28:20.803238 kubelet[2117]: E1213 01:28:20.803166 2117 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:28:20.806034 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:28:20.806313 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:28:21.970574 chronyd[1649]: Selected source PHC0 Dec 13 01:28:25.581317 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:28:25.585920 systemd[1]: Started sshd@0-10.200.20.11:22-10.200.16.10:35184.service - OpenSSH per-connection server daemon (10.200.16.10:35184). Dec 13 01:28:26.068825 sshd[2127]: Accepted publickey for core from 10.200.16.10 port 35184 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:28:26.070174 sshd[2127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:26.073951 systemd-logind[1661]: New session 3 of user core. Dec 13 01:28:26.084875 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:28:26.448346 systemd[1]: Started sshd@1-10.200.20.11:22-10.200.16.10:35196.service - OpenSSH per-connection server daemon (10.200.16.10:35196). Dec 13 01:28:26.856441 sshd[2132]: Accepted publickey for core from 10.200.16.10 port 35196 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:28:26.857967 sshd[2132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:26.862820 systemd-logind[1661]: New session 4 of user core. Dec 13 01:28:26.868859 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:28:27.156911 sshd[2132]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:27.160045 systemd[1]: sshd@1-10.200.20.11:22-10.200.16.10:35196.service: Deactivated successfully. Dec 13 01:28:27.162983 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:28:27.164703 systemd-logind[1661]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:28:27.165626 systemd-logind[1661]: Removed session 4. Dec 13 01:28:27.234776 systemd[1]: Started sshd@2-10.200.20.11:22-10.200.16.10:35202.service - OpenSSH per-connection server daemon (10.200.16.10:35202). Dec 13 01:28:27.646758 sshd[2139]: Accepted publickey for core from 10.200.16.10 port 35202 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:28:27.648196 sshd[2139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:27.652114 systemd-logind[1661]: New session 5 of user core. Dec 13 01:28:27.658839 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:28:27.962955 sshd[2139]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:27.966529 systemd[1]: sshd@2-10.200.20.11:22-10.200.16.10:35202.service: Deactivated successfully. Dec 13 01:28:27.968346 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:28:27.969170 systemd-logind[1661]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:28:27.969988 systemd-logind[1661]: Removed session 5. 
Dec 13 01:28:28.040953 systemd[1]: Started sshd@3-10.200.20.11:22-10.200.16.10:35212.service - OpenSSH per-connection server daemon (10.200.16.10:35212). Dec 13 01:28:28.459349 sshd[2146]: Accepted publickey for core from 10.200.16.10 port 35212 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:28:28.460841 sshd[2146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:28.464604 systemd-logind[1661]: New session 6 of user core. Dec 13 01:28:28.472856 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:28:28.765807 sshd[2146]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:28.769523 systemd[1]: sshd@3-10.200.20.11:22-10.200.16.10:35212.service: Deactivated successfully. Dec 13 01:28:28.771798 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:28:28.772792 systemd-logind[1661]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:28:28.773851 systemd-logind[1661]: Removed session 6. Dec 13 01:28:28.840513 systemd[1]: Started sshd@4-10.200.20.11:22-10.200.16.10:38102.service - OpenSSH per-connection server daemon (10.200.16.10:38102). Dec 13 01:28:29.252324 sshd[2153]: Accepted publickey for core from 10.200.16.10 port 38102 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:28:29.254263 sshd[2153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:29.258613 systemd-logind[1661]: New session 7 of user core. Dec 13 01:28:29.263827 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:28:29.635332 sudo[2156]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:28:29.635608 sudo[2156]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:28:29.665707 sudo[2156]: pam_unix(sudo:session): session closed for user root Dec 13 01:28:29.751544 sshd[2153]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:29.754720 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:28:29.756099 systemd[1]: sshd@4-10.200.20.11:22-10.200.16.10:38102.service: Deactivated successfully. Dec 13 01:28:29.758293 systemd-logind[1661]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:28:29.759518 systemd-logind[1661]: Removed session 7. Dec 13 01:28:29.824719 systemd[1]: Started sshd@5-10.200.20.11:22-10.200.16.10:38118.service - OpenSSH per-connection server daemon (10.200.16.10:38118). Dec 13 01:28:30.232647 sshd[2161]: Accepted publickey for core from 10.200.16.10 port 38118 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:28:30.234131 sshd[2161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:30.238914 systemd-logind[1661]: New session 8 of user core. Dec 13 01:28:30.241903 systemd[1]: Started session-8.scope - Session 8 of User core. 
Dec 13 01:28:30.467992 sudo[2165]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:28:30.468260 sudo[2165]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:28:30.471421 sudo[2165]: pam_unix(sudo:session): session closed for user root Dec 13 01:28:30.476134 sudo[2164]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:28:30.476388 sudo[2164]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:28:30.487911 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:28:30.491077 auditctl[2168]: No rules Dec 13 01:28:30.491395 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:28:30.491568 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:28:30.497254 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:28:30.517761 augenrules[2186]: No rules Dec 13 01:28:30.518652 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:28:30.519638 sudo[2164]: pam_unix(sudo:session): session closed for user root Dec 13 01:28:30.605896 sshd[2161]: pam_unix(sshd:session): session closed for user core Dec 13 01:28:30.609327 systemd[1]: sshd@5-10.200.20.11:22-10.200.16.10:38118.service: Deactivated successfully. Dec 13 01:28:30.610826 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:28:30.611415 systemd-logind[1661]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:28:30.612378 systemd-logind[1661]: Removed session 8. Dec 13 01:28:30.683146 systemd[1]: Started sshd@6-10.200.20.11:22-10.200.16.10:38128.service - OpenSSH per-connection server daemon (10.200.16.10:38128). Dec 13 01:28:31.030998 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 01:28:31.041927 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:28:31.109734 sshd[2194]: Accepted publickey for core from 10.200.16.10 port 38128 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:28:31.110251 sshd[2194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:28:31.114341 systemd-logind[1661]: New session 9 of user core. Dec 13 01:28:31.118909 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 01:28:31.134095 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:28:31.144961 (kubelet)[2205]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:28:31.186224 kubelet[2205]: E1213 01:28:31.186172 2205 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:28:31.188922 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:28:31.189065 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 01:28:31.352492 sudo[2212]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:28:31.352789 sudo[2212]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:28:32.415090 (dockerd)[2227]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:28:32.415446 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 01:28:33.018982 dockerd[2227]: time="2024-12-13T01:28:33.018926632Z" level=info msg="Starting up" Dec 13 01:28:33.465427 dockerd[2227]: time="2024-12-13T01:28:33.465380247Z" level=info msg="Loading containers: start." Dec 13 01:28:33.606699 kernel: Initializing XFRM netlink socket Dec 13 01:28:33.741479 systemd-networkd[1482]: docker0: Link UP Dec 13 01:28:33.768987 dockerd[2227]: time="2024-12-13T01:28:33.768938682Z" level=info msg="Loading containers: done." Dec 13 01:28:33.779981 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2725863925-merged.mount: Deactivated successfully. Dec 13 01:28:33.794549 dockerd[2227]: time="2024-12-13T01:28:33.794407239Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:28:33.794549 dockerd[2227]: time="2024-12-13T01:28:33.794601719Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:28:33.794549 dockerd[2227]: time="2024-12-13T01:28:33.794761599Z" level=info msg="Daemon has completed initialization" Dec 13 01:28:33.858374 dockerd[2227]: time="2024-12-13T01:28:33.857922189Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:28:33.858710 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:28:35.203554 containerd[1696]: time="2024-12-13T01:28:35.203514313Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Dec 13 01:28:36.064600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount670040947.mount: Deactivated successfully. 
Dec 13 01:28:38.341796 containerd[1696]: time="2024-12-13T01:28:38.341742499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:38.344286 containerd[1696]: time="2024-12-13T01:28:38.344244940Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=29864010" Dec 13 01:28:38.348187 containerd[1696]: time="2024-12-13T01:28:38.348135540Z" level=info msg="ImageCreate event name:\"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:38.352913 containerd[1696]: time="2024-12-13T01:28:38.352831101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:38.354217 containerd[1696]: time="2024-12-13T01:28:38.354017302Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"29860810\" in 3.150459309s" Dec 13 01:28:38.354217 containerd[1696]: time="2024-12-13T01:28:38.354052782Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:8202e87ffef091fe4f11dd113ff6f2ab16c70279775d224ddd8aa95e2dd0b966\"" Dec 13 01:28:38.374264 containerd[1696]: time="2024-12-13T01:28:38.374225386Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Dec 13 01:28:40.885615 containerd[1696]: time="2024-12-13T01:28:40.885542966Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:40.889839 containerd[1696]: time="2024-12-13T01:28:40.889558807Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=26900694" Dec 13 01:28:40.895567 containerd[1696]: time="2024-12-13T01:28:40.895519528Z" level=info msg="ImageCreate event name:\"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:40.902863 containerd[1696]: time="2024-12-13T01:28:40.902802849Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:40.904121 containerd[1696]: time="2024-12-13T01:28:40.904005730Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"28303015\" in 2.529737624s" Dec 13 01:28:40.904121 containerd[1696]: time="2024-12-13T01:28:40.904041690Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:4b2191aa4d4d6ca9fbd7704b35401bfa6b0b90de75db22c425053e97fd5c8338\"" Dec 13 01:28:40.924715 
containerd[1696]: time="2024-12-13T01:28:40.924682854Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Dec 13 01:28:41.354841 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 01:28:41.364934 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:28:41.464314 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:28:41.476939 (kubelet)[2443]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:28:41.517653 kubelet[2443]: E1213 01:28:41.517552 2443 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:28:41.519728 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:28:41.519852 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:28:42.099687 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Dec 13 01:28:42.657920 containerd[1696]: time="2024-12-13T01:28:42.657864347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:42.660600 containerd[1696]: time="2024-12-13T01:28:42.660464747Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=16164332" Dec 13 01:28:42.666387 containerd[1696]: time="2024-12-13T01:28:42.666329308Z" level=info msg="ImageCreate event name:\"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:42.672436 containerd[1696]: time="2024-12-13T01:28:42.672380990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:42.673523 containerd[1696]: time="2024-12-13T01:28:42.673492550Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"17566671\" in 1.748575536s" Dec 13 01:28:42.673764 containerd[1696]: time="2024-12-13T01:28:42.673622710Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:d43326c1723208785a33cdc1507082792eb041ca0d789c103c90180e31f65ca8\"" Dec 13 01:28:42.694843 containerd[1696]: time="2024-12-13T01:28:42.694756075Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Dec 13 01:28:43.558817 update_engine[1668]: I20241213 01:28:43.558733 1668 update_attempter.cc:509] Updating boot flags... Dec 13 01:28:43.661729 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 46 scanned by (udev-worker) (2476) Dec 13 01:28:43.902881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount50102190.mount: Deactivated successfully. 
Dec 13 01:28:45.154416 containerd[1696]: time="2024-12-13T01:28:45.154362524Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:45.157381 containerd[1696]: time="2024-12-13T01:28:45.157337484Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=25662011" Dec 13 01:28:45.161813 containerd[1696]: time="2024-12-13T01:28:45.161725123Z" level=info msg="ImageCreate event name:\"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:45.168889 containerd[1696]: time="2024-12-13T01:28:45.168831043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:45.169713 containerd[1696]: time="2024-12-13T01:28:45.169450723Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"25661030\" in 2.474654968s" Dec 13 01:28:45.169713 containerd[1696]: time="2024-12-13T01:28:45.169486403Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:4612aebc0675831aedbbde7cd56b85db91f1fdcf05ef923072961538ec497adb\"" Dec 13 01:28:45.190433 containerd[1696]: time="2024-12-13T01:28:45.190398361Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:28:45.947009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1183772779.mount: Deactivated successfully. 
Dec 13 01:28:46.922700 containerd[1696]: time="2024-12-13T01:28:46.922502748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:46.926357 containerd[1696]: time="2024-12-13T01:28:46.926323468Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Dec 13 01:28:46.930303 containerd[1696]: time="2024-12-13T01:28:46.930262268Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:46.937049 containerd[1696]: time="2024-12-13T01:28:46.936986547Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:46.938247 containerd[1696]: time="2024-12-13T01:28:46.937995987Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.747271546s" Dec 13 01:28:46.938247 containerd[1696]: time="2024-12-13T01:28:46.938036587Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 01:28:46.960386 containerd[1696]: time="2024-12-13T01:28:46.960307265Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 01:28:47.613397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4115073898.mount: Deactivated successfully. 
Dec 13 01:28:47.639704 containerd[1696]: time="2024-12-13T01:28:47.639280677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:47.642722 containerd[1696]: time="2024-12-13T01:28:47.642645237Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Dec 13 01:28:47.649689 containerd[1696]: time="2024-12-13T01:28:47.649623396Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:47.656290 containerd[1696]: time="2024-12-13T01:28:47.656237356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:47.657120 containerd[1696]: time="2024-12-13T01:28:47.656994236Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 696.565571ms" Dec 13 01:28:47.657120 containerd[1696]: time="2024-12-13T01:28:47.657027196Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Dec 13 01:28:47.676492 containerd[1696]: time="2024-12-13T01:28:47.676451234Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Dec 13 01:28:48.404833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2471569436.mount: Deactivated successfully. Dec 13 01:28:51.604861 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Dec 13 01:28:51.613852 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:28:51.707759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:28:51.718969 (kubelet)[2617]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:28:51.757287 kubelet[2617]: E1213 01:28:51.757244 2617 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:28:51.759356 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:28:51.759480 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 01:28:53.057370 containerd[1696]: time="2024-12-13T01:28:53.057312181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:53.060147 containerd[1696]: time="2024-12-13T01:28:53.059877061Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Dec 13 01:28:53.064681 containerd[1696]: time="2024-12-13T01:28:53.064633181Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:53.075396 containerd[1696]: time="2024-12-13T01:28:53.075334260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:28:53.076613 containerd[1696]: time="2024-12-13T01:28:53.076467859Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 5.399975945s" Dec 13 01:28:53.076613 containerd[1696]: time="2024-12-13T01:28:53.076508299Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Dec 13 01:28:57.582101 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:28:57.593765 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:28:57.615630 systemd[1]: Reloading requested from client PID 2692 ('systemctl') (unit session-9.scope)... Dec 13 01:28:57.615800 systemd[1]: Reloading... Dec 13 01:28:57.707694 zram_generator::config[2730]: No configuration found. Dec 13 01:28:57.809705 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:28:57.890473 systemd[1]: Reloading finished in 274 ms. Dec 13 01:28:57.925221 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 01:28:57.925297 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 01:28:57.925592 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:28:57.931975 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:29:00.252186 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:29:00.264977 (kubelet)[2796]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:29:00.303365 kubelet[2796]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:29:00.303365 kubelet[2796]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Dec 13 01:29:00.303365 kubelet[2796]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:29:00.303789 kubelet[2796]: I1213 01:29:00.303408 2796 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:29:01.545694 kubelet[2796]: I1213 01:29:01.543849 2796 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 01:29:01.545694 kubelet[2796]: I1213 01:29:01.543878 2796 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:29:01.545694 kubelet[2796]: I1213 01:29:01.544083 2796 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 01:29:01.555814 kubelet[2796]: E1213 01:29:01.555772 2796 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.11:6443: connect: connection refused Dec 13 01:29:01.555953 kubelet[2796]: I1213 01:29:01.555840 2796 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:29:01.565610 kubelet[2796]: I1213 01:29:01.565579 2796 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:29:01.565814 kubelet[2796]: I1213 01:29:01.565785 2796 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:29:01.565974 kubelet[2796]: I1213 01:29:01.565814 2796 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.2.1-a-c1e94b9ee1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:29:01.566057 kubelet[2796]: I1213 01:29:01.565985 2796 topology_manager.go:138] "Creating 
topology manager with none policy" Dec 13 01:29:01.566057 kubelet[2796]: I1213 01:29:01.565993 2796 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:29:01.566141 kubelet[2796]: I1213 01:29:01.566123 2796 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:29:01.567202 kubelet[2796]: I1213 01:29:01.567185 2796 kubelet.go:400] "Attempting to sync node with API server" Dec 13 01:29:01.567250 kubelet[2796]: I1213 01:29:01.567209 2796 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:29:01.567250 kubelet[2796]: I1213 01:29:01.567237 2796 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:29:01.568412 kubelet[2796]: I1213 01:29:01.567251 2796 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:29:01.570344 kubelet[2796]: W1213 01:29:01.569411 2796 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.11:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Dec 13 01:29:01.570344 kubelet[2796]: E1213 01:29:01.569478 2796 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.11:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Dec 13 01:29:01.570344 kubelet[2796]: W1213 01:29:01.569762 2796 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-a-c1e94b9ee1&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Dec 13 01:29:01.570344 kubelet[2796]: E1213 01:29:01.569802 2796 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-a-c1e94b9ee1&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Dec 13 01:29:01.570344 kubelet[2796]: I1213 01:29:01.569890 2796 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:29:01.570344 kubelet[2796]: I1213 01:29:01.570081 2796 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:29:01.570344 kubelet[2796]: W1213 01:29:01.570136 2796 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Dec 13 01:29:01.572141 kubelet[2796]: I1213 01:29:01.571923 2796 server.go:1264] "Started kubelet" Dec 13 01:29:01.575482 kubelet[2796]: I1213 01:29:01.573762 2796 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:29:01.575482 kubelet[2796]: I1213 01:29:01.574282 2796 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:29:01.575482 kubelet[2796]: I1213 01:29:01.574604 2796 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:29:01.575482 kubelet[2796]: I1213 01:29:01.574609 2796 server.go:455] "Adding debug handlers to kubelet server" Dec 13 01:29:01.576039 kubelet[2796]: E1213 01:29:01.575930 2796 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.11:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.11:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.2.1-a-c1e94b9ee1.18109853a5f8e947 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.2.1-a-c1e94b9ee1,UID:ci-4081.2.1-a-c1e94b9ee1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.2.1-a-c1e94b9ee1,},FirstTimestamp:2024-12-13 01:29:01.571901767 +0000 UTC m=+1.304039699,LastTimestamp:2024-12-13 01:29:01.571901767 +0000 UTC m=+1.304039699,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.2.1-a-c1e94b9ee1,}" Dec 13 01:29:01.576603 kubelet[2796]: I1213 01:29:01.576579 2796 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:29:01.578909 kubelet[2796]: E1213 01:29:01.578890 2796 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-c1e94b9ee1\" not found" Dec 13 01:29:01.579035 kubelet[2796]: I1213 01:29:01.579026 2796 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:29:01.579190 kubelet[2796]: I1213 01:29:01.579179 2796 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 01:29:01.579300 kubelet[2796]: I1213 01:29:01.579291 2796 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:29:01.579753 kubelet[2796]: W1213 01:29:01.579700 2796 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Dec 13 01:29:01.579864 kubelet[2796]: E1213 01:29:01.579852 2796 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Dec 13 01:29:01.581166 kubelet[2796]: E1213 01:29:01.581127 2796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-a-c1e94b9ee1?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="200ms" Dec 13 01:29:01.581428 kubelet[2796]: E1213 01:29:01.581409 2796 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:29:01.582085 kubelet[2796]: I1213 01:29:01.582067 2796 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:29:01.582256 kubelet[2796]: I1213 01:29:01.582240 2796 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:29:01.583442 kubelet[2796]: I1213 01:29:01.583413 2796 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:29:01.703007 kubelet[2796]: I1213 01:29:01.702983 2796 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:01.703656 kubelet[2796]: E1213 01:29:01.703620 2796 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:01.703909 kubelet[2796]: I1213 01:29:01.703888 2796 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:29:01.703909 kubelet[2796]: I1213 01:29:01.703903 2796 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:29:01.703992 kubelet[2796]: I1213 01:29:01.703924 2796 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:29:01.782437 kubelet[2796]: E1213 01:29:01.782390 2796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-a-c1e94b9ee1?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="400ms" Dec 13 01:29:01.906681 kubelet[2796]: I1213 01:29:01.906311 2796 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:01.906681 kubelet[2796]: E1213 01:29:01.906606 2796 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:02.183540 kubelet[2796]: E1213 01:29:02.183421 2796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-a-c1e94b9ee1?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="800ms" Dec 13 01:29:02.308584 kubelet[2796]: I1213 01:29:02.308547 2796 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:02.308886 kubelet[2796]: E1213 01:29:02.308860 2796 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:02.629261 kubelet[2796]: W1213 01:29:02.629175 2796 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-a-c1e94b9ee1&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Dec 13 01:29:02.629261 kubelet[2796]: E1213 01:29:02.629242 2796 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-a-c1e94b9ee1&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Dec 13 01:29:02.697463 kubelet[2796]: I1213 01:29:02.697429 2796 policy_none.go:49] "None policy: Start" Dec 13 01:29:02.698597 kubelet[2796]: I1213 01:29:02.698247 2796 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:29:02.698597 kubelet[2796]: I1213 01:29:02.698276 2796 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:29:02.709937 kubelet[2796]: I1213 01:29:02.709901 2796 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:29:02.711379 kubelet[2796]: I1213 01:29:02.711087 2796 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:29:02.711379 kubelet[2796]: I1213 01:29:02.711119 2796 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:29:02.711379 kubelet[2796]: I1213 01:29:02.711137 2796 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 01:29:02.711379 kubelet[2796]: E1213 01:29:02.711175 2796 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:29:02.713387 kubelet[2796]: W1213 01:29:02.713343 2796 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Dec 13 01:29:02.713387 kubelet[2796]: E1213 01:29:02.713385 2796 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Dec 13 01:29:02.748041 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 01:29:02.758813 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 01:29:02.763203 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Dec 13 01:29:02.776015 kubelet[2796]: I1213 01:29:02.775490 2796 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:29:02.776015 kubelet[2796]: I1213 01:29:02.775726 2796 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:29:02.776015 kubelet[2796]: I1213 01:29:02.775830 2796 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:29:02.777719 kubelet[2796]: E1213 01:29:02.777699 2796 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.2.1-a-c1e94b9ee1\" not found" Dec 13 01:29:02.811667 kubelet[2796]: I1213 01:29:02.811610 2796 topology_manager.go:215] "Topology Admit Handler" podUID="182a65a06722d27adc4c23f16e98ba09" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:02.813387 kubelet[2796]: I1213 01:29:02.813347 2796 topology_manager.go:215] "Topology Admit Handler" podUID="6e9a1f67d9d23bc7c1450e693b91c45a" podNamespace="kube-system" podName="kube-scheduler-ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:02.814627 kubelet[2796]: I1213 01:29:02.814593 2796 topology_manager.go:215] "Topology Admit Handler" podUID="645892387c96903d8485742d7346eace" podNamespace="kube-system" podName="kube-apiserver-ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:02.821932 systemd[1]: Created slice kubepods-burstable-pod182a65a06722d27adc4c23f16e98ba09.slice - libcontainer container kubepods-burstable-pod182a65a06722d27adc4c23f16e98ba09.slice. Dec 13 01:29:02.844465 systemd[1]: Created slice kubepods-burstable-pod6e9a1f67d9d23bc7c1450e693b91c45a.slice - libcontainer container kubepods-burstable-pod6e9a1f67d9d23bc7c1450e693b91c45a.slice. Dec 13 01:29:02.858011 systemd[1]: Created slice kubepods-burstable-pod645892387c96903d8485742d7346eace.slice - libcontainer container kubepods-burstable-pod645892387c96903d8485742d7346eace.slice. 
Dec 13 01:29:02.864271 kubelet[2796]: W1213 01:29:02.864215 2796 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.11:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Dec 13 01:29:02.864271 kubelet[2796]: E1213 01:29:02.864276 2796 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.11:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Dec 13 01:29:02.886571 kubelet[2796]: I1213 01:29:02.886466 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/182a65a06722d27adc4c23f16e98ba09-ca-certs\") pod \"kube-controller-manager-ci-4081.2.1-a-c1e94b9ee1\" (UID: \"182a65a06722d27adc4c23f16e98ba09\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:02.886571 kubelet[2796]: I1213 01:29:02.886506 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/182a65a06722d27adc4c23f16e98ba09-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.1-a-c1e94b9ee1\" (UID: \"182a65a06722d27adc4c23f16e98ba09\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:02.886571 kubelet[2796]: I1213 01:29:02.886525 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/645892387c96903d8485742d7346eace-ca-certs\") pod \"kube-apiserver-ci-4081.2.1-a-c1e94b9ee1\" (UID: \"645892387c96903d8485742d7346eace\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:02.886571 kubelet[2796]: I1213 01:29:02.886542 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/645892387c96903d8485742d7346eace-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.1-a-c1e94b9ee1\" (UID: \"645892387c96903d8485742d7346eace\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:02.886571 kubelet[2796]: I1213 01:29:02.886571 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/182a65a06722d27adc4c23f16e98ba09-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.1-a-c1e94b9ee1\" (UID: \"182a65a06722d27adc4c23f16e98ba09\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:02.886777 kubelet[2796]: I1213 01:29:02.886590 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/182a65a06722d27adc4c23f16e98ba09-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.1-a-c1e94b9ee1\" (UID: \"182a65a06722d27adc4c23f16e98ba09\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:02.886777 kubelet[2796]: I1213 01:29:02.886606 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/182a65a06722d27adc4c23f16e98ba09-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.1-a-c1e94b9ee1\" (UID: 
\"182a65a06722d27adc4c23f16e98ba09\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:02.886777 kubelet[2796]: I1213 01:29:02.886624 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e9a1f67d9d23bc7c1450e693b91c45a-kubeconfig\") pod \"kube-scheduler-ci-4081.2.1-a-c1e94b9ee1\" (UID: \"6e9a1f67d9d23bc7c1450e693b91c45a\") " pod="kube-system/kube-scheduler-ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:02.886777 kubelet[2796]: I1213 01:29:02.886637 2796 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/645892387c96903d8485742d7346eace-k8s-certs\") pod \"kube-apiserver-ci-4081.2.1-a-c1e94b9ee1\" (UID: \"645892387c96903d8485742d7346eace\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:02.925931 kubelet[2796]: W1213 01:29:02.925846 2796 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Dec 13 01:29:02.925931 kubelet[2796]: E1213 01:29:02.925908 2796 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Dec 13 01:29:02.984441 kubelet[2796]: E1213 01:29:02.984394 2796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-a-c1e94b9ee1?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="1.6s" Dec 13 01:29:03.110951 kubelet[2796]: I1213 01:29:03.110920 2796 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:03.111235 kubelet[2796]: E1213 01:29:03.111202 2796 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:03.143473 containerd[1696]: time="2024-12-13T01:29:03.143298255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.1-a-c1e94b9ee1,Uid:182a65a06722d27adc4c23f16e98ba09,Namespace:kube-system,Attempt:0,}" Dec 13 01:29:03.156515 containerd[1696]: time="2024-12-13T01:29:03.156316850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.1-a-c1e94b9ee1,Uid:6e9a1f67d9d23bc7c1450e693b91c45a,Namespace:kube-system,Attempt:0,}" Dec 13 01:29:03.161154 containerd[1696]: time="2024-12-13T01:29:03.161121329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.1-a-c1e94b9ee1,Uid:645892387c96903d8485742d7346eace,Namespace:kube-system,Attempt:0,}" Dec 13 01:29:03.558957 kubelet[2796]: E1213 01:29:03.558859 2796 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.11:6443: connect: connection refused Dec 13 01:29:03.758675 kubelet[2796]: W1213 
01:29:03.758624 2796 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Dec 13 01:29:03.758675 kubelet[2796]: E1213 01:29:03.758684 2796 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Dec 13 01:29:04.054291 kubelet[2796]: E1213 01:29:04.054185 2796 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.11:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.11:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.2.1-a-c1e94b9ee1.18109853a5f8e947 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.2.1-a-c1e94b9ee1,UID:ci-4081.2.1-a-c1e94b9ee1,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.2.1-a-c1e94b9ee1,},FirstTimestamp:2024-12-13 01:29:01.571901767 +0000 UTC m=+1.304039699,LastTimestamp:2024-12-13 01:29:01.571901767 +0000 UTC m=+1.304039699,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.2.1-a-c1e94b9ee1,}" Dec 13 01:29:04.585544 kubelet[2796]: E1213 01:29:04.585492 2796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-a-c1e94b9ee1?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="3.2s" Dec 13 01:29:04.713984 kubelet[2796]: I1213 01:29:04.713647 2796 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:04.714230 kubelet[2796]: E1213 01:29:04.714208 2796 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:04.864133 kubelet[2796]: W1213 01:29:04.864077 2796 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-a-c1e94b9ee1&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Dec 13 01:29:04.864133 kubelet[2796]: E1213 01:29:04.864115 2796 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-a-c1e94b9ee1&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Dec 13 01:29:05.060341 kubelet[2796]: W1213 01:29:05.060308 2796 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Dec 13 01:29:05.060341 kubelet[2796]: E1213 01:29:05.060350 2796 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Dec 13 01:29:05.638720 kubelet[2796]: W1213 01:29:05.638681 2796 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.11:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Dec 13 01:29:05.638720 kubelet[2796]: E1213 01:29:05.638724 2796 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.11:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Dec 13 01:29:06.633920 kubelet[2796]: W1213 01:29:06.633858 2796 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Dec 13 01:29:06.633920 kubelet[2796]: E1213 01:29:06.633900 2796 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.11:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Dec 13 01:29:07.702190 kubelet[2796]: E1213 01:29:07.702156 2796 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.11:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.11:6443: connect: connection refused Dec 13 01:29:08.146129 kubelet[2796]: E1213 01:29:07.786354 2796 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.11:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-a-c1e94b9ee1?timeout=10s\": dial tcp 10.200.20.11:6443: connect: connection refused" interval="6.4s" Dec 13 01:29:08.146129 kubelet[2796]: I1213 01:29:07.916490 2796 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:08.146129 kubelet[2796]: E1213 01:29:07.916794 2796 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.11:6443/api/v1/nodes\": dial tcp 10.200.20.11:6443: connect: connection refused" node="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:08.870080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount66707321.mount: Deactivated successfully. 
Dec 13 01:29:08.905157 containerd[1696]: time="2024-12-13T01:29:08.905105117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:29:08.909327 containerd[1696]: time="2024-12-13T01:29:08.909291237Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Dec 13 01:29:08.931696 containerd[1696]: time="2024-12-13T01:29:08.931321874Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:29:08.935838 containerd[1696]: time="2024-12-13T01:29:08.935696714Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:29:08.981260 containerd[1696]: time="2024-12-13T01:29:08.981221709Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:29:09.042514 containerd[1696]: time="2024-12-13T01:29:09.041889222Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:29:09.046961 containerd[1696]: time="2024-12-13T01:29:09.046928781Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:29:09.086184 containerd[1696]: time="2024-12-13T01:29:09.086095417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:29:09.087232 containerd[1696]: time="2024-12-13T01:29:09.087001017Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 5.943618042s" Dec 13 01:29:09.088026 kubelet[2796]: W1213 01:29:09.087962 2796 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Dec 13 01:29:09.088026 kubelet[2796]: E1213 01:29:09.088010 2796 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.11:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Dec 13 01:29:09.088691 containerd[1696]: time="2024-12-13T01:29:09.088534497Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 5.932142447s" Dec 13 01:29:09.184776 containerd[1696]: time="2024-12-13T01:29:09.184323166Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 6.023143477s" Dec 13 01:29:10.069999 kubelet[2796]: W1213 01:29:10.069962 2796 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-a-c1e94b9ee1&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Dec 13 01:29:10.069999 kubelet[2796]: E1213 01:29:10.070004 2796 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.11:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-a-c1e94b9ee1&limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Dec 13 01:29:10.861395 containerd[1696]: time="2024-12-13T01:29:10.861293769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:29:10.861395 containerd[1696]: time="2024-12-13T01:29:10.861350209Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:29:10.861869 containerd[1696]: time="2024-12-13T01:29:10.861365729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:10.861869 containerd[1696]: time="2024-12-13T01:29:10.861437969Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:10.864736 containerd[1696]: time="2024-12-13T01:29:10.864654928Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:29:10.865081 containerd[1696]: time="2024-12-13T01:29:10.864939088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:29:10.865180 containerd[1696]: time="2024-12-13T01:29:10.865143008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:10.865599 containerd[1696]: time="2024-12-13T01:29:10.865561088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:10.866461 containerd[1696]: time="2024-12-13T01:29:10.866382688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:29:10.866461 containerd[1696]: time="2024-12-13T01:29:10.866434608Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:29:10.867063 containerd[1696]: time="2024-12-13T01:29:10.866468048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:10.867063 containerd[1696]: time="2024-12-13T01:29:10.866585928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:10.893825 systemd[1]: Started cri-containerd-8a19d954df7d46ed9dbe46d1597080a95c337da9d1d7133344b6f3239450292a.scope - libcontainer container 8a19d954df7d46ed9dbe46d1597080a95c337da9d1d7133344b6f3239450292a. Dec 13 01:29:10.895195 systemd[1]: Started cri-containerd-aab4a2f5393194c48d3718984f452a7d8cde856f5dea9845c0f36ad7ad556f92.scope - libcontainer container aab4a2f5393194c48d3718984f452a7d8cde856f5dea9845c0f36ad7ad556f92. Dec 13 01:29:10.900444 systemd[1]: Started cri-containerd-f7b4052a832fde88751aa45a89629fb2d7a8a99c76872c9a4bd492942e3fa21a.scope - libcontainer container f7b4052a832fde88751aa45a89629fb2d7a8a99c76872c9a4bd492942e3fa21a. Dec 13 01:29:10.915916 kubelet[2796]: W1213 01:29:10.915885 2796 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.11:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Dec 13 01:29:10.916927 kubelet[2796]: E1213 01:29:10.916802 2796 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.11:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.11:6443: connect: connection refused Dec 13 01:29:10.942004 containerd[1696]: time="2024-12-13T01:29:10.941259759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.1-a-c1e94b9ee1,Uid:6e9a1f67d9d23bc7c1450e693b91c45a,Namespace:kube-system,Attempt:0,} returns sandbox id \"aab4a2f5393194c48d3718984f452a7d8cde856f5dea9845c0f36ad7ad556f92\"" Dec 13 01:29:10.947994 containerd[1696]: time="2024-12-13T01:29:10.947962958Z" level=info msg="CreateContainer within sandbox \"aab4a2f5393194c48d3718984f452a7d8cde856f5dea9845c0f36ad7ad556f92\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:29:10.957954 containerd[1696]: time="2024-12-13T01:29:10.957915037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.1-a-c1e94b9ee1,Uid:182a65a06722d27adc4c23f16e98ba09,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a19d954df7d46ed9dbe46d1597080a95c337da9d1d7133344b6f3239450292a\"" Dec 13 01:29:10.958239 containerd[1696]: time="2024-12-13T01:29:10.958195636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.1-a-c1e94b9ee1,Uid:645892387c96903d8485742d7346eace,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7b4052a832fde88751aa45a89629fb2d7a8a99c76872c9a4bd492942e3fa21a\"" Dec 13 01:29:10.961072 containerd[1696]: time="2024-12-13T01:29:10.961038436Z" level=info msg="CreateContainer within sandbox \"f7b4052a832fde88751aa45a89629fb2d7a8a99c76872c9a4bd492942e3fa21a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:29:10.963797 containerd[1696]: time="2024-12-13T01:29:10.963762716Z" level=info msg="CreateContainer within sandbox \"8a19d954df7d46ed9dbe46d1597080a95c337da9d1d7133344b6f3239450292a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:29:11.038974 containerd[1696]: time="2024-12-13T01:29:11.038929226Z" level=info msg="CreateContainer within sandbox \"f7b4052a832fde88751aa45a89629fb2d7a8a99c76872c9a4bd492942e3fa21a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0513c71a1097ede0ea355cff685b54d6a5efff3bc8163eef1621cf1dc6d5dd3f\"" Dec 13 01:29:11.039600 containerd[1696]: 
time="2024-12-13T01:29:11.039574026Z" level=info msg="StartContainer for \"0513c71a1097ede0ea355cff685b54d6a5efff3bc8163eef1621cf1dc6d5dd3f\"" Dec 13 01:29:11.040597 containerd[1696]: time="2024-12-13T01:29:11.040521226Z" level=info msg="CreateContainer within sandbox \"aab4a2f5393194c48d3718984f452a7d8cde856f5dea9845c0f36ad7ad556f92\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a433fea81a172d6125dc2fbf3028195c4091517755c885170285d8a084317519\"" Dec 13 01:29:11.041997 containerd[1696]: time="2024-12-13T01:29:11.041048906Z" level=info msg="StartContainer for \"a433fea81a172d6125dc2fbf3028195c4091517755c885170285d8a084317519\"" Dec 13 01:29:11.062364 containerd[1696]: time="2024-12-13T01:29:11.062331344Z" level=info msg="CreateContainer within sandbox \"8a19d954df7d46ed9dbe46d1597080a95c337da9d1d7133344b6f3239450292a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"15f7fdbfd86c5ec2fbca2b036363b8eefaeb12ba9a52aa604d519fdc337b1563\"" Dec 13 01:29:11.063761 containerd[1696]: time="2024-12-13T01:29:11.063731223Z" level=info msg="StartContainer for \"15f7fdbfd86c5ec2fbca2b036363b8eefaeb12ba9a52aa604d519fdc337b1563\"" Dec 13 01:29:11.064872 systemd[1]: Started cri-containerd-0513c71a1097ede0ea355cff685b54d6a5efff3bc8163eef1621cf1dc6d5dd3f.scope - libcontainer container 0513c71a1097ede0ea355cff685b54d6a5efff3bc8163eef1621cf1dc6d5dd3f. Dec 13 01:29:11.075241 systemd[1]: Started cri-containerd-a433fea81a172d6125dc2fbf3028195c4091517755c885170285d8a084317519.scope - libcontainer container a433fea81a172d6125dc2fbf3028195c4091517755c885170285d8a084317519. Dec 13 01:29:11.102824 systemd[1]: Started cri-containerd-15f7fdbfd86c5ec2fbca2b036363b8eefaeb12ba9a52aa604d519fdc337b1563.scope - libcontainer container 15f7fdbfd86c5ec2fbca2b036363b8eefaeb12ba9a52aa604d519fdc337b1563. 
Dec 13 01:29:11.117295 containerd[1696]: time="2024-12-13T01:29:11.117208017Z" level=info msg="StartContainer for \"0513c71a1097ede0ea355cff685b54d6a5efff3bc8163eef1621cf1dc6d5dd3f\" returns successfully" Dec 13 01:29:11.147610 containerd[1696]: time="2024-12-13T01:29:11.147494333Z" level=info msg="StartContainer for \"a433fea81a172d6125dc2fbf3028195c4091517755c885170285d8a084317519\" returns successfully" Dec 13 01:29:11.156267 containerd[1696]: time="2024-12-13T01:29:11.156205292Z" level=info msg="StartContainer for \"15f7fdbfd86c5ec2fbca2b036363b8eefaeb12ba9a52aa604d519fdc337b1563\" returns successfully" Dec 13 01:29:12.778067 kubelet[2796]: E1213 01:29:12.778027 2796 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.2.1-a-c1e94b9ee1\" not found" Dec 13 01:29:14.037127 kubelet[2796]: E1213 01:29:14.037091 2796 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081.2.1-a-c1e94b9ee1" not found Dec 13 01:29:14.194387 kubelet[2796]: E1213 01:29:14.194338 2796 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.2.1-a-c1e94b9ee1\" not found" node="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:14.320350 kubelet[2796]: I1213 01:29:14.319297 2796 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:14.326598 kubelet[2796]: I1213 01:29:14.326489 2796 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:14.334757 kubelet[2796]: E1213 01:29:14.334715 2796 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-c1e94b9ee1\" not found" Dec 13 01:29:14.435522 kubelet[2796]: E1213 01:29:14.435479 2796 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-c1e94b9ee1\" not found" Dec 13 01:29:14.535978 kubelet[2796]: E1213 01:29:14.535934 2796 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-c1e94b9ee1\" not found" Dec 13 01:29:14.636878 kubelet[2796]: E1213 01:29:14.636831 2796 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-c1e94b9ee1\" not found" Dec 13 01:29:14.737772 kubelet[2796]: E1213 01:29:14.737724 2796 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-c1e94b9ee1\" not found" Dec 13 01:29:14.838777 kubelet[2796]: E1213 01:29:14.838710 2796 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-c1e94b9ee1\" not found" Dec 13 01:29:14.939907 kubelet[2796]: E1213 01:29:14.939753 2796 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-c1e94b9ee1\" not found" Dec 13 01:29:15.040089 kubelet[2796]: E1213 01:29:15.040042 2796 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-c1e94b9ee1\" not found" Dec 13 01:29:15.140900 kubelet[2796]: E1213 01:29:15.140861 2796 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-c1e94b9ee1\" not found" Dec 13 01:29:15.241642 kubelet[2796]: E1213 01:29:15.241541 2796 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-c1e94b9ee1\" not found" Dec 13 01:29:15.342797 kubelet[2796]: E1213 01:29:15.342745 2796 
kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-c1e94b9ee1\" not found" Dec 13 01:29:15.443276 kubelet[2796]: E1213 01:29:15.443234 2796 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-c1e94b9ee1\" not found" Dec 13 01:29:15.543892 kubelet[2796]: E1213 01:29:15.543781 2796 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-c1e94b9ee1\" not found" Dec 13 01:29:15.644785 kubelet[2796]: E1213 01:29:15.644743 2796 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-c1e94b9ee1\" not found" Dec 13 01:29:15.688074 systemd[1]: Reloading requested from client PID 3076 ('systemctl') (unit session-9.scope)... Dec 13 01:29:15.688090 systemd[1]: Reloading... Dec 13 01:29:15.745781 kubelet[2796]: E1213 01:29:15.745743 2796 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-c1e94b9ee1\" not found" Dec 13 01:29:15.779691 zram_generator::config[3119]: No configuration found. Dec 13 01:29:15.846303 kubelet[2796]: E1213 01:29:15.846030 2796 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-c1e94b9ee1\" not found" Dec 13 01:29:15.878548 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:29:15.946852 kubelet[2796]: E1213 01:29:15.946789 2796 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4081.2.1-a-c1e94b9ee1\" not found" Dec 13 01:29:15.970486 systemd[1]: Reloading finished in 282 ms. Dec 13 01:29:16.005832 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:29:16.016766 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:29:16.016971 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:29:16.017023 systemd[1]: kubelet.service: Consumed 1.618s CPU time, 110.1M memory peak, 0B memory swap peak. Dec 13 01:29:16.023181 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:29:16.367263 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:29:16.371980 (kubelet)[3180]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:29:16.425040 kubelet[3180]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:29:16.425040 kubelet[3180]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:29:16.425040 kubelet[3180]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
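The repeated "Error getting the current node from lister" messages are a variation on the same theme: the kubelet is looking up its own Node object, which only exists after registration (logged at 01:29:14 above and confirmed again by the restarted kubelet below). A hedged sketch of checking for that Node object directly and distinguishing NotFound from other failures, under the same kubeconfig assumption as the earlier snippet:

package main

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is an assumption for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Node name taken from the log lines above.
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "ci-4081.2.1-a-c1e94b9ee1", metav1.GetOptions{})
	switch {
	case apierrors.IsNotFound(err):
		fmt.Println("node not registered yet") // corresponds to the repeated "not found" messages
	case err != nil:
		fmt.Println("lookup failed:", err)
	default:
		fmt.Println("node registered:", node.Name)
	}
}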
Dec 13 01:29:16.425370 kubelet[3180]: I1213 01:29:16.425081 3180 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:29:16.431670 kubelet[3180]: I1213 01:29:16.431524 3180 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 01:29:16.431670 kubelet[3180]: I1213 01:29:16.431557 3180 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:29:16.431939 kubelet[3180]: I1213 01:29:16.431835 3180 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 01:29:16.435338 kubelet[3180]: I1213 01:29:16.434435 3180 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:29:16.436309 kubelet[3180]: I1213 01:29:16.435835 3180 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:29:16.445906 kubelet[3180]: I1213 01:29:16.445807 3180 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:29:16.446047 kubelet[3180]: I1213 01:29:16.445989 3180 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:29:16.446592 kubelet[3180]: I1213 01:29:16.446017 3180 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.2.1-a-c1e94b9ee1","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:29:16.446592 kubelet[3180]: I1213 01:29:16.446192 3180 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:29:16.446592 kubelet[3180]: I1213 01:29:16.446201 3180 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:29:16.446592 kubelet[3180]: I1213 01:29:16.446233 3180 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:29:16.446592 kubelet[3180]: I1213 01:29:16.446321 3180 kubelet.go:400] "Attempting to sync node with API server" Dec 13 01:29:16.446822 kubelet[3180]: I1213 01:29:16.446332 3180 kubelet.go:301] "Adding 
static pod path" path="/etc/kubernetes/manifests" Dec 13 01:29:16.446822 kubelet[3180]: I1213 01:29:16.446358 3180 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:29:16.446822 kubelet[3180]: I1213 01:29:16.446371 3180 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:29:16.448761 kubelet[3180]: I1213 01:29:16.448156 3180 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:29:16.448761 kubelet[3180]: I1213 01:29:16.448322 3180 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:29:16.449174 kubelet[3180]: I1213 01:29:16.449131 3180 server.go:1264] "Started kubelet" Dec 13 01:29:16.452430 kubelet[3180]: I1213 01:29:16.452399 3180 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:29:16.453350 kubelet[3180]: I1213 01:29:16.453310 3180 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:29:16.454633 kubelet[3180]: I1213 01:29:16.453493 3180 server.go:455] "Adding debug handlers to kubelet server" Dec 13 01:29:16.455626 kubelet[3180]: I1213 01:29:16.455191 3180 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:29:16.456207 kubelet[3180]: I1213 01:29:16.455956 3180 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:29:16.458290 kubelet[3180]: I1213 01:29:16.457428 3180 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:29:16.458919 kubelet[3180]: I1213 01:29:16.458807 3180 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 01:29:16.458977 kubelet[3180]: I1213 01:29:16.458941 3180 reconciler.go:26] "Reconciler: start to sync state" Dec 13 01:29:16.460897 kubelet[3180]: I1213 01:29:16.460279 3180 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:29:16.461259 kubelet[3180]: I1213 01:29:16.461090 3180 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:29:16.461259 kubelet[3180]: I1213 01:29:16.461130 3180 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:29:16.461259 kubelet[3180]: I1213 01:29:16.461146 3180 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 01:29:16.461259 kubelet[3180]: E1213 01:29:16.461182 3180 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:29:16.465483 kubelet[3180]: I1213 01:29:16.462680 3180 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:29:16.465483 kubelet[3180]: I1213 01:29:16.462771 3180 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:29:16.471491 kubelet[3180]: I1213 01:29:16.471406 3180 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:29:16.476966 kubelet[3180]: E1213 01:29:16.476933 3180 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:29:16.541642 kubelet[3180]: I1213 01:29:16.541609 3180 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:29:16.541642 kubelet[3180]: I1213 01:29:16.541630 3180 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:29:16.541844 kubelet[3180]: I1213 01:29:16.541655 3180 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:29:16.541957 kubelet[3180]: I1213 01:29:16.541931 3180 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:29:16.542006 kubelet[3180]: I1213 01:29:16.541950 3180 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:29:16.542006 kubelet[3180]: I1213 01:29:16.541970 3180 policy_none.go:49] "None policy: Start" Dec 13 01:29:16.542868 kubelet[3180]: I1213 01:29:16.542842 3180 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:29:16.542868 kubelet[3180]: I1213 01:29:16.542870 3180 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:29:16.543036 kubelet[3180]: I1213 01:29:16.543016 3180 state_mem.go:75] "Updated machine memory state" Dec 13 01:29:16.547324 kubelet[3180]: I1213 01:29:16.547145 3180 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:29:16.547423 kubelet[3180]: I1213 01:29:16.547324 3180 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 01:29:16.547423 kubelet[3180]: I1213 01:29:16.547418 3180 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:29:16.561302 kubelet[3180]: I1213 01:29:16.561251 3180 topology_manager.go:215] "Topology Admit Handler" podUID="645892387c96903d8485742d7346eace" podNamespace="kube-system" podName="kube-apiserver-ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:16.561437 kubelet[3180]: I1213 01:29:16.561363 3180 topology_manager.go:215] "Topology Admit Handler" podUID="182a65a06722d27adc4c23f16e98ba09" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:16.561437 kubelet[3180]: I1213 01:29:16.561401 3180 topology_manager.go:215] "Topology Admit Handler" podUID="6e9a1f67d9d23bc7c1450e693b91c45a" podNamespace="kube-system" podName="kube-scheduler-ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:16.562127 kubelet[3180]: I1213 01:29:16.562101 3180 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:16.580057 kubelet[3180]: W1213 01:29:16.580023 3180 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 01:29:16.584450 kubelet[3180]: I1213 01:29:16.584406 3180 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:16.584541 kubelet[3180]: I1213 01:29:16.584497 3180 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:16.586128 kubelet[3180]: W1213 01:29:16.585878 3180 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 01:29:16.586128 kubelet[3180]: W1213 01:29:16.585971 3180 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 01:29:16.760645 kubelet[3180]: I1213 
01:29:16.760405 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/182a65a06722d27adc4c23f16e98ba09-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.1-a-c1e94b9ee1\" (UID: \"182a65a06722d27adc4c23f16e98ba09\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:16.760645 kubelet[3180]: I1213 01:29:16.760445 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/182a65a06722d27adc4c23f16e98ba09-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.1-a-c1e94b9ee1\" (UID: \"182a65a06722d27adc4c23f16e98ba09\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:16.760645 kubelet[3180]: I1213 01:29:16.760466 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/645892387c96903d8485742d7346eace-ca-certs\") pod \"kube-apiserver-ci-4081.2.1-a-c1e94b9ee1\" (UID: \"645892387c96903d8485742d7346eace\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:16.760645 kubelet[3180]: I1213 01:29:16.760482 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/645892387c96903d8485742d7346eace-k8s-certs\") pod \"kube-apiserver-ci-4081.2.1-a-c1e94b9ee1\" (UID: \"645892387c96903d8485742d7346eace\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:16.760645 kubelet[3180]: I1213 01:29:16.760497 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/645892387c96903d8485742d7346eace-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.1-a-c1e94b9ee1\" (UID: \"645892387c96903d8485742d7346eace\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:16.760952 kubelet[3180]: I1213 01:29:16.760514 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/182a65a06722d27adc4c23f16e98ba09-ca-certs\") pod \"kube-controller-manager-ci-4081.2.1-a-c1e94b9ee1\" (UID: \"182a65a06722d27adc4c23f16e98ba09\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:16.760952 kubelet[3180]: I1213 01:29:16.760530 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/182a65a06722d27adc4c23f16e98ba09-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.1-a-c1e94b9ee1\" (UID: \"182a65a06722d27adc4c23f16e98ba09\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:16.760952 kubelet[3180]: I1213 01:29:16.760545 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/182a65a06722d27adc4c23f16e98ba09-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.1-a-c1e94b9ee1\" (UID: \"182a65a06722d27adc4c23f16e98ba09\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:16.760952 kubelet[3180]: I1213 01:29:16.760561 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6e9a1f67d9d23bc7c1450e693b91c45a-kubeconfig\") pod \"kube-scheduler-ci-4081.2.1-a-c1e94b9ee1\" (UID: \"6e9a1f67d9d23bc7c1450e693b91c45a\") " pod="kube-system/kube-scheduler-ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:17.448264 kubelet[3180]: I1213 01:29:17.448015 3180 apiserver.go:52] "Watching apiserver" Dec 13 01:29:17.559326 kubelet[3180]: I1213 01:29:17.559288 3180 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 01:29:17.564438 kubelet[3180]: W1213 01:29:17.563883 3180 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 01:29:17.564438 kubelet[3180]: E1213 01:29:17.563946 3180 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.2.1-a-c1e94b9ee1\" already exists" pod="kube-system/kube-apiserver-ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:29:17.594679 kubelet[3180]: I1213 01:29:17.594603 3180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.2.1-a-c1e94b9ee1" podStartSLOduration=1.59456709 podStartE2EDuration="1.59456709s" podCreationTimestamp="2024-12-13 01:29:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:29:17.577014212 +0000 UTC m=+1.201787331" watchObservedRunningTime="2024-12-13 01:29:17.59456709 +0000 UTC m=+1.219340209" Dec 13 01:29:17.625292 kubelet[3180]: I1213 01:29:17.625243 3180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.2.1-a-c1e94b9ee1" podStartSLOduration=1.625225846 podStartE2EDuration="1.625225846s" podCreationTimestamp="2024-12-13 01:29:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:29:17.59683689 +0000 UTC m=+1.221610009" watchObservedRunningTime="2024-12-13 01:29:17.625225846 +0000 UTC m=+1.249998965" Dec 13 01:29:17.640839 kubelet[3180]: I1213 01:29:17.640400 3180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.2.1-a-c1e94b9ee1" podStartSLOduration=1.640353684 podStartE2EDuration="1.640353684s" podCreationTimestamp="2024-12-13 01:29:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:29:17.625926166 +0000 UTC m=+1.250699285" watchObservedRunningTime="2024-12-13 01:29:17.640353684 +0000 UTC m=+1.265126803" Dec 13 01:29:21.322202 sudo[2212]: pam_unix(sudo:session): session closed for user root Dec 13 01:29:21.388186 sshd[2194]: pam_unix(sshd:session): session closed for user core Dec 13 01:29:21.392045 systemd[1]: sshd@6-10.200.20.11:22-10.200.16.10:38128.service: Deactivated successfully. Dec 13 01:29:21.393641 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:29:21.393822 systemd[1]: session-9.scope: Consumed 5.643s CPU time, 188.2M memory peak, 0B memory swap peak. Dec 13 01:29:21.394449 systemd-logind[1661]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:29:21.395939 systemd-logind[1661]: Removed session 9. 
Dec 13 01:29:30.803223 kubelet[3180]: I1213 01:29:30.802975 3180 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:29:30.803647 containerd[1696]: time="2024-12-13T01:29:30.803339221Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 01:29:30.803848 kubelet[3180]: I1213 01:29:30.803710 3180 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:29:31.272163 kubelet[3180]: I1213 01:29:31.271566 3180 topology_manager.go:215] "Topology Admit Handler" podUID="13f11c0c-6b27-46aa-af2e-0c3d82e090c1" podNamespace="kube-system" podName="kube-proxy-hxglx" Dec 13 01:29:31.281514 systemd[1]: Created slice kubepods-besteffort-pod13f11c0c_6b27_46aa_af2e_0c3d82e090c1.slice - libcontainer container kubepods-besteffort-pod13f11c0c_6b27_46aa_af2e_0c3d82e090c1.slice. Dec 13 01:29:31.353165 kubelet[3180]: I1213 01:29:31.352996 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13f11c0c-6b27-46aa-af2e-0c3d82e090c1-xtables-lock\") pod \"kube-proxy-hxglx\" (UID: \"13f11c0c-6b27-46aa-af2e-0c3d82e090c1\") " pod="kube-system/kube-proxy-hxglx" Dec 13 01:29:31.353165 kubelet[3180]: I1213 01:29:31.353038 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13f11c0c-6b27-46aa-af2e-0c3d82e090c1-lib-modules\") pod \"kube-proxy-hxglx\" (UID: \"13f11c0c-6b27-46aa-af2e-0c3d82e090c1\") " pod="kube-system/kube-proxy-hxglx" Dec 13 01:29:31.353165 kubelet[3180]: I1213 01:29:31.353058 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/13f11c0c-6b27-46aa-af2e-0c3d82e090c1-kube-proxy\") pod \"kube-proxy-hxglx\" (UID: \"13f11c0c-6b27-46aa-af2e-0c3d82e090c1\") " pod="kube-system/kube-proxy-hxglx" Dec 13 01:29:31.353165 kubelet[3180]: I1213 01:29:31.353075 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4jjv\" (UniqueName: \"kubernetes.io/projected/13f11c0c-6b27-46aa-af2e-0c3d82e090c1-kube-api-access-r4jjv\") pod \"kube-proxy-hxglx\" (UID: \"13f11c0c-6b27-46aa-af2e-0c3d82e090c1\") " pod="kube-system/kube-proxy-hxglx" Dec 13 01:29:31.591849 containerd[1696]: time="2024-12-13T01:29:31.591723535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hxglx,Uid:13f11c0c-6b27-46aa-af2e-0c3d82e090c1,Namespace:kube-system,Attempt:0,}" Dec 13 01:29:31.636867 containerd[1696]: time="2024-12-13T01:29:31.636705531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:29:31.636867 containerd[1696]: time="2024-12-13T01:29:31.636770890Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:29:31.636867 containerd[1696]: time="2024-12-13T01:29:31.636783490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:31.636867 containerd[1696]: time="2024-12-13T01:29:31.636865930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:31.659844 systemd[1]: Started cri-containerd-90b1b41534f083131daa865558b4ec24638eb3e7c5eab865a68c70c8d31ec8c6.scope - libcontainer container 90b1b41534f083131daa865558b4ec24638eb3e7c5eab865a68c70c8d31ec8c6. Dec 13 01:29:31.690871 containerd[1696]: time="2024-12-13T01:29:31.690799885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hxglx,Uid:13f11c0c-6b27-46aa-af2e-0c3d82e090c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"90b1b41534f083131daa865558b4ec24638eb3e7c5eab865a68c70c8d31ec8c6\"" Dec 13 01:29:31.698899 containerd[1696]: time="2024-12-13T01:29:31.698735084Z" level=info msg="CreateContainer within sandbox \"90b1b41534f083131daa865558b4ec24638eb3e7c5eab865a68c70c8d31ec8c6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:29:31.754306 containerd[1696]: time="2024-12-13T01:29:31.754157238Z" level=info msg="CreateContainer within sandbox \"90b1b41534f083131daa865558b4ec24638eb3e7c5eab865a68c70c8d31ec8c6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"82bcd79687e981b989457eb09cbf1bb5955e45cf65b1fd07c3ab55bc85caf734\"" Dec 13 01:29:31.756345 containerd[1696]: time="2024-12-13T01:29:31.756303277Z" level=info msg="StartContainer for \"82bcd79687e981b989457eb09cbf1bb5955e45cf65b1fd07c3ab55bc85caf734\"" Dec 13 01:29:31.780852 systemd[1]: Started cri-containerd-82bcd79687e981b989457eb09cbf1bb5955e45cf65b1fd07c3ab55bc85caf734.scope - libcontainer container 82bcd79687e981b989457eb09cbf1bb5955e45cf65b1fd07c3ab55bc85caf734. Dec 13 01:29:31.814074 containerd[1696]: time="2024-12-13T01:29:31.814030431Z" level=info msg="StartContainer for \"82bcd79687e981b989457eb09cbf1bb5955e45cf65b1fd07c3ab55bc85caf734\" returns successfully" Dec 13 01:29:31.876222 kubelet[3180]: I1213 01:29:31.875125 3180 topology_manager.go:215] "Topology Admit Handler" podUID="65dcdbd6-6279-4f24-9433-bb037b428f49" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-p5jkg" Dec 13 01:29:31.883052 systemd[1]: Created slice kubepods-besteffort-pod65dcdbd6_6279_4f24_9433_bb037b428f49.slice - libcontainer container kubepods-besteffort-pod65dcdbd6_6279_4f24_9433_bb037b428f49.slice. Dec 13 01:29:31.958896 kubelet[3180]: I1213 01:29:31.958847 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ck44x\" (UniqueName: \"kubernetes.io/projected/65dcdbd6-6279-4f24-9433-bb037b428f49-kube-api-access-ck44x\") pod \"tigera-operator-7bc55997bb-p5jkg\" (UID: \"65dcdbd6-6279-4f24-9433-bb037b428f49\") " pod="tigera-operator/tigera-operator-7bc55997bb-p5jkg" Dec 13 01:29:31.958896 kubelet[3180]: I1213 01:29:31.958893 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/65dcdbd6-6279-4f24-9433-bb037b428f49-var-lib-calico\") pod \"tigera-operator-7bc55997bb-p5jkg\" (UID: \"65dcdbd6-6279-4f24-9433-bb037b428f49\") " pod="tigera-operator/tigera-operator-7bc55997bb-p5jkg" Dec 13 01:29:32.188610 containerd[1696]: time="2024-12-13T01:29:32.188483550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-p5jkg,Uid:65dcdbd6-6279-4f24-9433-bb037b428f49,Namespace:tigera-operator,Attempt:0,}" Dec 13 01:29:32.238112 containerd[1696]: time="2024-12-13T01:29:32.237416145Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:29:32.238112 containerd[1696]: time="2024-12-13T01:29:32.237989025Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:29:32.238112 containerd[1696]: time="2024-12-13T01:29:32.238003705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:32.238997 containerd[1696]: time="2024-12-13T01:29:32.238109225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:32.255876 systemd[1]: Started cri-containerd-0d6b04e9d3e59e1b7643cf3a90c88f5bc9bf470945980c24ffdb4c1948b48825.scope - libcontainer container 0d6b04e9d3e59e1b7643cf3a90c88f5bc9bf470945980c24ffdb4c1948b48825. Dec 13 01:29:32.284230 containerd[1696]: time="2024-12-13T01:29:32.283936340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-p5jkg,Uid:65dcdbd6-6279-4f24-9433-bb037b428f49,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"0d6b04e9d3e59e1b7643cf3a90c88f5bc9bf470945980c24ffdb4c1948b48825\"" Dec 13 01:29:32.286274 containerd[1696]: time="2024-12-13T01:29:32.286140140Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 01:29:32.561990 kubelet[3180]: I1213 01:29:32.561807 3180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hxglx" podStartSLOduration=1.56149979 podStartE2EDuration="1.56149979s" podCreationTimestamp="2024-12-13 01:29:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:29:32.56091019 +0000 UTC m=+16.185683269" watchObservedRunningTime="2024-12-13 01:29:32.56149979 +0000 UTC m=+16.186272869" Dec 13 01:29:34.442074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1051501001.mount: Deactivated successfully. 
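The PullImage request for quay.io/tigera/operator:v1.36.2 issued above is served by containerd and completes a couple of seconds later (next lines). A rough equivalent with the containerd Go client, assuming the default socket path and the CRI image namespace k8s.io (both are defaults, not taken from this log):

package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Default containerd socket; adjust if configured differently.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// CRI stores Kubernetes images in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	// Same image reference as the PullImage call in the log.
	img, err := client.Pull(ctx, "quay.io/tigera/operator:v1.36.2", containerd.WithPullUnpack)
	if err != nil {
		panic(err)
	}
	fmt.Println("pulled:", img.Name())
}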
Dec 13 01:29:34.837480 containerd[1696]: time="2024-12-13T01:29:34.837357060Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:34.840727 containerd[1696]: time="2024-12-13T01:29:34.840688819Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19126008" Dec 13 01:29:34.846274 containerd[1696]: time="2024-12-13T01:29:34.846211579Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:34.855334 containerd[1696]: time="2024-12-13T01:29:34.854512298Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:34.855334 containerd[1696]: time="2024-12-13T01:29:34.855225898Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 2.569051238s" Dec 13 01:29:34.855334 containerd[1696]: time="2024-12-13T01:29:34.855253098Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Dec 13 01:29:34.859803 containerd[1696]: time="2024-12-13T01:29:34.859760257Z" level=info msg="CreateContainer within sandbox \"0d6b04e9d3e59e1b7643cf3a90c88f5bc9bf470945980c24ffdb4c1948b48825\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 01:29:34.907359 containerd[1696]: time="2024-12-13T01:29:34.907270172Z" level=info msg="CreateContainer within sandbox \"0d6b04e9d3e59e1b7643cf3a90c88f5bc9bf470945980c24ffdb4c1948b48825\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ef67014ebec6134716c73eaa18f37544c1220385f4678ff54aef500760e7ac6b\"" Dec 13 01:29:34.908214 containerd[1696]: time="2024-12-13T01:29:34.908166972Z" level=info msg="StartContainer for \"ef67014ebec6134716c73eaa18f37544c1220385f4678ff54aef500760e7ac6b\"" Dec 13 01:29:34.934881 systemd[1]: Started cri-containerd-ef67014ebec6134716c73eaa18f37544c1220385f4678ff54aef500760e7ac6b.scope - libcontainer container ef67014ebec6134716c73eaa18f37544c1220385f4678ff54aef500760e7ac6b. 
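The ImageCreate events above carry the label io.cri-containerd.image=managed (and io.cri-containerd.pinned=pinned on the pause image earlier); these are bookkeeping labels kept in containerd's image store and can be read back. A hedged sketch reusing the client setup from the previous snippet:

package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	// Read the stored image record, including the CRI labels seen in the events above.
	img, err := client.ImageService().Get(ctx, "quay.io/tigera/operator:v1.36.2")
	if err != nil {
		panic(err)
	}
	fmt.Println(img.Name, img.Labels, img.Target.Digest)
}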
Dec 13 01:29:34.961535 containerd[1696]: time="2024-12-13T01:29:34.961372446Z" level=info msg="StartContainer for \"ef67014ebec6134716c73eaa18f37544c1220385f4678ff54aef500760e7ac6b\" returns successfully" Dec 13 01:29:38.766202 kubelet[3180]: I1213 01:29:38.765269 3180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-p5jkg" podStartSLOduration=5.193922581 podStartE2EDuration="7.765248659s" podCreationTimestamp="2024-12-13 01:29:31 +0000 UTC" firstStartedPulling="2024-12-13 01:29:32.28512538 +0000 UTC m=+15.909898499" lastFinishedPulling="2024-12-13 01:29:34.856451458 +0000 UTC m=+18.481224577" observedRunningTime="2024-12-13 01:29:35.565752738 +0000 UTC m=+19.190525857" watchObservedRunningTime="2024-12-13 01:29:38.765248659 +0000 UTC m=+22.390021778" Dec 13 01:29:38.766202 kubelet[3180]: I1213 01:29:38.765401 3180 topology_manager.go:215] "Topology Admit Handler" podUID="07617256-4393-4b62-bafd-3f482098b3ce" podNamespace="calico-system" podName="calico-typha-879897b5f-l8t9b" Dec 13 01:29:38.774426 systemd[1]: Created slice kubepods-besteffort-pod07617256_4393_4b62_bafd_3f482098b3ce.slice - libcontainer container kubepods-besteffort-pod07617256_4393_4b62_bafd_3f482098b3ce.slice. Dec 13 01:29:38.800681 kubelet[3180]: I1213 01:29:38.800623 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07617256-4393-4b62-bafd-3f482098b3ce-tigera-ca-bundle\") pod \"calico-typha-879897b5f-l8t9b\" (UID: \"07617256-4393-4b62-bafd-3f482098b3ce\") " pod="calico-system/calico-typha-879897b5f-l8t9b" Dec 13 01:29:38.800952 kubelet[3180]: I1213 01:29:38.800866 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/07617256-4393-4b62-bafd-3f482098b3ce-typha-certs\") pod \"calico-typha-879897b5f-l8t9b\" (UID: \"07617256-4393-4b62-bafd-3f482098b3ce\") " pod="calico-system/calico-typha-879897b5f-l8t9b" Dec 13 01:29:38.800952 kubelet[3180]: I1213 01:29:38.800897 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l5l9\" (UniqueName: \"kubernetes.io/projected/07617256-4393-4b62-bafd-3f482098b3ce-kube-api-access-7l5l9\") pod \"calico-typha-879897b5f-l8t9b\" (UID: \"07617256-4393-4b62-bafd-3f482098b3ce\") " pod="calico-system/calico-typha-879897b5f-l8t9b" Dec 13 01:29:38.867464 kubelet[3180]: I1213 01:29:38.867387 3180 topology_manager.go:215] "Topology Admit Handler" podUID="8d1b7d01-5ed8-47be-84f0-2c8dff431d86" podNamespace="calico-system" podName="calico-node-bbjfr" Dec 13 01:29:38.877022 systemd[1]: Created slice kubepods-besteffort-pod8d1b7d01_5ed8_47be_84f0_2c8dff431d86.slice - libcontainer container kubepods-besteffort-pod8d1b7d01_5ed8_47be_84f0_2c8dff431d86.slice. 
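With the tigera-operator container running, the calico-typha and calico-node pods are admitted above and their volumes attached below. A quick way to see everything the scheduler has bound to this node is a field-selector list; a sketch reusing the client-go setup from the earlier snippets (kubeconfig path again assumed for illustration):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Node name taken from the log; lists pods across all namespaces bound to this node.
	pods, err := cs.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{
		FieldSelector: "spec.nodeName=ci-4081.2.1-a-c1e94b9ee1",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Namespace, p.Name, p.Status.Phase)
	}
}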
Dec 13 01:29:38.901553 kubelet[3180]: I1213 01:29:38.901498 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8d1b7d01-5ed8-47be-84f0-2c8dff431d86-node-certs\") pod \"calico-node-bbjfr\" (UID: \"8d1b7d01-5ed8-47be-84f0-2c8dff431d86\") " pod="calico-system/calico-node-bbjfr" Dec 13 01:29:38.901553 kubelet[3180]: I1213 01:29:38.901543 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8d1b7d01-5ed8-47be-84f0-2c8dff431d86-var-run-calico\") pod \"calico-node-bbjfr\" (UID: \"8d1b7d01-5ed8-47be-84f0-2c8dff431d86\") " pod="calico-system/calico-node-bbjfr" Dec 13 01:29:38.901553 kubelet[3180]: I1213 01:29:38.901562 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8d1b7d01-5ed8-47be-84f0-2c8dff431d86-cni-net-dir\") pod \"calico-node-bbjfr\" (UID: \"8d1b7d01-5ed8-47be-84f0-2c8dff431d86\") " pod="calico-system/calico-node-bbjfr" Dec 13 01:29:38.901767 kubelet[3180]: I1213 01:29:38.901577 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d1b7d01-5ed8-47be-84f0-2c8dff431d86-lib-modules\") pod \"calico-node-bbjfr\" (UID: \"8d1b7d01-5ed8-47be-84f0-2c8dff431d86\") " pod="calico-system/calico-node-bbjfr" Dec 13 01:29:38.901767 kubelet[3180]: I1213 01:29:38.901593 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8d1b7d01-5ed8-47be-84f0-2c8dff431d86-policysync\") pod \"calico-node-bbjfr\" (UID: \"8d1b7d01-5ed8-47be-84f0-2c8dff431d86\") " pod="calico-system/calico-node-bbjfr" Dec 13 01:29:38.901767 kubelet[3180]: I1213 01:29:38.901608 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8d1b7d01-5ed8-47be-84f0-2c8dff431d86-cni-bin-dir\") pod \"calico-node-bbjfr\" (UID: \"8d1b7d01-5ed8-47be-84f0-2c8dff431d86\") " pod="calico-system/calico-node-bbjfr" Dec 13 01:29:38.901767 kubelet[3180]: I1213 01:29:38.901622 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8d1b7d01-5ed8-47be-84f0-2c8dff431d86-cni-log-dir\") pod \"calico-node-bbjfr\" (UID: \"8d1b7d01-5ed8-47be-84f0-2c8dff431d86\") " pod="calico-system/calico-node-bbjfr" Dec 13 01:29:38.901767 kubelet[3180]: I1213 01:29:38.901636 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8d1b7d01-5ed8-47be-84f0-2c8dff431d86-flexvol-driver-host\") pod \"calico-node-bbjfr\" (UID: \"8d1b7d01-5ed8-47be-84f0-2c8dff431d86\") " pod="calico-system/calico-node-bbjfr" Dec 13 01:29:38.901885 kubelet[3180]: I1213 01:29:38.901655 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8d1b7d01-5ed8-47be-84f0-2c8dff431d86-xtables-lock\") pod \"calico-node-bbjfr\" (UID: \"8d1b7d01-5ed8-47be-84f0-2c8dff431d86\") " pod="calico-system/calico-node-bbjfr" Dec 13 01:29:38.901885 kubelet[3180]: I1213 01:29:38.901776 3180 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xl44l\" (UniqueName: \"kubernetes.io/projected/8d1b7d01-5ed8-47be-84f0-2c8dff431d86-kube-api-access-xl44l\") pod \"calico-node-bbjfr\" (UID: \"8d1b7d01-5ed8-47be-84f0-2c8dff431d86\") " pod="calico-system/calico-node-bbjfr" Dec 13 01:29:38.901885 kubelet[3180]: I1213 01:29:38.901797 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d1b7d01-5ed8-47be-84f0-2c8dff431d86-tigera-ca-bundle\") pod \"calico-node-bbjfr\" (UID: \"8d1b7d01-5ed8-47be-84f0-2c8dff431d86\") " pod="calico-system/calico-node-bbjfr" Dec 13 01:29:38.901885 kubelet[3180]: I1213 01:29:38.901813 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8d1b7d01-5ed8-47be-84f0-2c8dff431d86-var-lib-calico\") pod \"calico-node-bbjfr\" (UID: \"8d1b7d01-5ed8-47be-84f0-2c8dff431d86\") " pod="calico-system/calico-node-bbjfr" Dec 13 01:29:38.994584 kubelet[3180]: I1213 01:29:38.994536 3180 topology_manager.go:215] "Topology Admit Handler" podUID="a57b7da1-6e0a-4e25-9020-f599a3a71d0b" podNamespace="calico-system" podName="csi-node-driver-z7bfm" Dec 13 01:29:38.994890 kubelet[3180]: E1213 01:29:38.994860 3180 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z7bfm" podUID="a57b7da1-6e0a-4e25-9020-f599a3a71d0b" Dec 13 01:29:39.004401 kubelet[3180]: E1213 01:29:39.004106 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.004629 kubelet[3180]: W1213 01:29:39.004592 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.004629 kubelet[3180]: E1213 01:29:39.004641 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.005374 kubelet[3180]: E1213 01:29:39.005036 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.005374 kubelet[3180]: W1213 01:29:39.005051 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.005540 kubelet[3180]: E1213 01:29:39.005504 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:29:39.005865 kubelet[3180]: E1213 01:29:39.005838 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.005865 kubelet[3180]: W1213 01:29:39.005856 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.005865 kubelet[3180]: E1213 01:29:39.005877 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.006098 kubelet[3180]: E1213 01:29:39.006060 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.006098 kubelet[3180]: W1213 01:29:39.006072 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.006098 kubelet[3180]: E1213 01:29:39.006082 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.006249 kubelet[3180]: E1213 01:29:39.006230 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.006249 kubelet[3180]: W1213 01:29:39.006242 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.006308 kubelet[3180]: E1213 01:29:39.006251 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.006748 kubelet[3180]: E1213 01:29:39.006727 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.006748 kubelet[3180]: W1213 01:29:39.006743 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.007475 kubelet[3180]: E1213 01:29:39.007446 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.007554 kubelet[3180]: E1213 01:29:39.007536 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.007554 kubelet[3180]: W1213 01:29:39.007545 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.007554 kubelet[3180]: E1213 01:29:39.007654 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:29:39.008392 kubelet[3180]: E1213 01:29:39.008283 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.008392 kubelet[3180]: W1213 01:29:39.008304 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.008392 kubelet[3180]: E1213 01:29:39.008363 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.008653 kubelet[3180]: E1213 01:29:39.008631 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.008653 kubelet[3180]: W1213 01:29:39.008649 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.009852 kubelet[3180]: E1213 01:29:39.009825 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.009852 kubelet[3180]: W1213 01:29:39.009844 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.010327 kubelet[3180]: E1213 01:29:39.010292 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.010327 kubelet[3180]: E1213 01:29:39.010322 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.010753 kubelet[3180]: E1213 01:29:39.010728 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.010753 kubelet[3180]: W1213 01:29:39.010746 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.010753 kubelet[3180]: E1213 01:29:39.010783 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.011157 kubelet[3180]: E1213 01:29:39.011127 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.011157 kubelet[3180]: W1213 01:29:39.011148 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.011273 kubelet[3180]: E1213 01:29:39.011250 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:29:39.011448 kubelet[3180]: E1213 01:29:39.011426 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.011448 kubelet[3180]: W1213 01:29:39.011443 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.011543 kubelet[3180]: E1213 01:29:39.011462 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.012188 kubelet[3180]: E1213 01:29:39.012161 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.012188 kubelet[3180]: W1213 01:29:39.012181 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.012593 kubelet[3180]: E1213 01:29:39.012296 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.012593 kubelet[3180]: E1213 01:29:39.012475 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.012593 kubelet[3180]: W1213 01:29:39.012485 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.014974 kubelet[3180]: E1213 01:29:39.014842 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.014974 kubelet[3180]: W1213 01:29:39.014866 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.015595 kubelet[3180]: E1213 01:29:39.015538 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.015595 kubelet[3180]: W1213 01:29:39.015555 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.015595 kubelet[3180]: E1213 01:29:39.015569 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:29:39.016315 kubelet[3180]: E1213 01:29:39.016233 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.016315 kubelet[3180]: W1213 01:29:39.016250 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.016315 kubelet[3180]: E1213 01:29:39.016262 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.016315 kubelet[3180]: E1213 01:29:39.016289 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.018718 kubelet[3180]: E1213 01:29:39.017644 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.018997 kubelet[3180]: E1213 01:29:39.017768 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.018997 kubelet[3180]: W1213 01:29:39.018879 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.018997 kubelet[3180]: E1213 01:29:39.018897 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.019977 kubelet[3180]: E1213 01:29:39.019850 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.019977 kubelet[3180]: W1213 01:29:39.019866 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.019977 kubelet[3180]: E1213 01:29:39.019885 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.021113 kubelet[3180]: E1213 01:29:39.021099 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.023668 kubelet[3180]: W1213 01:29:39.021744 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.023668 kubelet[3180]: E1213 01:29:39.021768 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:29:39.025755 kubelet[3180]: E1213 01:29:39.025738 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.025983 kubelet[3180]: W1213 01:29:39.025967 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.026058 kubelet[3180]: E1213 01:29:39.026046 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.032483 kubelet[3180]: E1213 01:29:39.032466 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.032592 kubelet[3180]: W1213 01:29:39.032578 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.032653 kubelet[3180]: E1213 01:29:39.032639 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.080263 containerd[1696]: time="2024-12-13T01:29:39.080213184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-879897b5f-l8t9b,Uid:07617256-4393-4b62-bafd-3f482098b3ce,Namespace:calico-system,Attempt:0,}" Dec 13 01:29:39.093845 kubelet[3180]: E1213 01:29:39.093803 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.093845 kubelet[3180]: W1213 01:29:39.093829 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.093845 kubelet[3180]: E1213 01:29:39.093852 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.094356 kubelet[3180]: E1213 01:29:39.094307 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.094356 kubelet[3180]: W1213 01:29:39.094332 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.094356 kubelet[3180]: E1213 01:29:39.094347 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:29:39.094718 kubelet[3180]: E1213 01:29:39.094612 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.094718 kubelet[3180]: W1213 01:29:39.094623 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.094718 kubelet[3180]: E1213 01:29:39.094634 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.095237 kubelet[3180]: E1213 01:29:39.094883 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.095237 kubelet[3180]: W1213 01:29:39.094894 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.095237 kubelet[3180]: E1213 01:29:39.094907 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.095237 kubelet[3180]: E1213 01:29:39.095194 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.095237 kubelet[3180]: W1213 01:29:39.095204 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.095237 kubelet[3180]: E1213 01:29:39.095213 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.095745 kubelet[3180]: E1213 01:29:39.095347 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.095745 kubelet[3180]: W1213 01:29:39.095354 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.095745 kubelet[3180]: E1213 01:29:39.095384 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.095745 kubelet[3180]: E1213 01:29:39.095534 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.095745 kubelet[3180]: W1213 01:29:39.095542 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.095745 kubelet[3180]: E1213 01:29:39.095551 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:29:39.095745 kubelet[3180]: E1213 01:29:39.095701 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.095745 kubelet[3180]: W1213 01:29:39.095710 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.095745 kubelet[3180]: E1213 01:29:39.095718 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.096430 kubelet[3180]: E1213 01:29:39.095903 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.096430 kubelet[3180]: W1213 01:29:39.095916 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.096430 kubelet[3180]: E1213 01:29:39.095925 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.096430 kubelet[3180]: E1213 01:29:39.096107 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.096430 kubelet[3180]: W1213 01:29:39.096117 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.096430 kubelet[3180]: E1213 01:29:39.096125 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.096943 kubelet[3180]: E1213 01:29:39.096923 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.096943 kubelet[3180]: W1213 01:29:39.096940 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.096943 kubelet[3180]: E1213 01:29:39.096952 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.097305 kubelet[3180]: E1213 01:29:39.097154 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.097305 kubelet[3180]: W1213 01:29:39.097164 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.097305 kubelet[3180]: E1213 01:29:39.097173 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:29:39.097875 kubelet[3180]: E1213 01:29:39.097855 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.097875 kubelet[3180]: W1213 01:29:39.097873 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.098241 kubelet[3180]: E1213 01:29:39.097884 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.098241 kubelet[3180]: E1213 01:29:39.098079 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.098241 kubelet[3180]: W1213 01:29:39.098088 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.098241 kubelet[3180]: E1213 01:29:39.098097 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.098764 kubelet[3180]: E1213 01:29:39.098431 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.098764 kubelet[3180]: W1213 01:29:39.098442 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.098764 kubelet[3180]: E1213 01:29:39.098452 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.098764 kubelet[3180]: E1213 01:29:39.098604 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.098764 kubelet[3180]: W1213 01:29:39.098612 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.098764 kubelet[3180]: E1213 01:29:39.098620 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.099444 kubelet[3180]: E1213 01:29:39.098802 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.099444 kubelet[3180]: W1213 01:29:39.098810 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.099444 kubelet[3180]: E1213 01:29:39.098819 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:29:39.099444 kubelet[3180]: E1213 01:29:39.098949 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.099444 kubelet[3180]: W1213 01:29:39.098955 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.099444 kubelet[3180]: E1213 01:29:39.098962 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.099444 kubelet[3180]: E1213 01:29:39.099138 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.099444 kubelet[3180]: W1213 01:29:39.099146 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.099444 kubelet[3180]: E1213 01:29:39.099155 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.099444 kubelet[3180]: E1213 01:29:39.099407 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.100056 kubelet[3180]: W1213 01:29:39.099416 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.100056 kubelet[3180]: E1213 01:29:39.099426 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.103853 kubelet[3180]: E1213 01:29:39.103813 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.103853 kubelet[3180]: W1213 01:29:39.103833 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.103853 kubelet[3180]: E1213 01:29:39.103847 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:29:39.103981 kubelet[3180]: I1213 01:29:39.103870 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a57b7da1-6e0a-4e25-9020-f599a3a71d0b-socket-dir\") pod \"csi-node-driver-z7bfm\" (UID: \"a57b7da1-6e0a-4e25-9020-f599a3a71d0b\") " pod="calico-system/csi-node-driver-z7bfm" Dec 13 01:29:39.104213 kubelet[3180]: E1213 01:29:39.104054 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.104213 kubelet[3180]: W1213 01:29:39.104070 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.104213 kubelet[3180]: E1213 01:29:39.104080 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.104213 kubelet[3180]: I1213 01:29:39.104097 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a57b7da1-6e0a-4e25-9020-f599a3a71d0b-registration-dir\") pod \"csi-node-driver-z7bfm\" (UID: \"a57b7da1-6e0a-4e25-9020-f599a3a71d0b\") " pod="calico-system/csi-node-driver-z7bfm" Dec 13 01:29:39.104578 kubelet[3180]: E1213 01:29:39.104480 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.104578 kubelet[3180]: W1213 01:29:39.104493 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.104578 kubelet[3180]: E1213 01:29:39.104515 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.104578 kubelet[3180]: I1213 01:29:39.104533 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bjkz\" (UniqueName: \"kubernetes.io/projected/a57b7da1-6e0a-4e25-9020-f599a3a71d0b-kube-api-access-8bjkz\") pod \"csi-node-driver-z7bfm\" (UID: \"a57b7da1-6e0a-4e25-9020-f599a3a71d0b\") " pod="calico-system/csi-node-driver-z7bfm" Dec 13 01:29:39.104973 kubelet[3180]: E1213 01:29:39.104946 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.104973 kubelet[3180]: W1213 01:29:39.104963 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.105077 kubelet[3180]: E1213 01:29:39.104987 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:29:39.105077 kubelet[3180]: I1213 01:29:39.105004 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a57b7da1-6e0a-4e25-9020-f599a3a71d0b-varrun\") pod \"csi-node-driver-z7bfm\" (UID: \"a57b7da1-6e0a-4e25-9020-f599a3a71d0b\") " pod="calico-system/csi-node-driver-z7bfm" Dec 13 01:29:39.105403 kubelet[3180]: E1213 01:29:39.105172 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.105403 kubelet[3180]: W1213 01:29:39.105185 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.105403 kubelet[3180]: E1213 01:29:39.105201 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.105403 kubelet[3180]: I1213 01:29:39.105214 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a57b7da1-6e0a-4e25-9020-f599a3a71d0b-kubelet-dir\") pod \"csi-node-driver-z7bfm\" (UID: \"a57b7da1-6e0a-4e25-9020-f599a3a71d0b\") " pod="calico-system/csi-node-driver-z7bfm" Dec 13 01:29:39.105766 kubelet[3180]: E1213 01:29:39.105446 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.105766 kubelet[3180]: W1213 01:29:39.105456 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.105766 kubelet[3180]: E1213 01:29:39.105536 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.105766 kubelet[3180]: E1213 01:29:39.105630 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.105766 kubelet[3180]: W1213 01:29:39.105637 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.105766 kubelet[3180]: E1213 01:29:39.105723 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.106553 kubelet[3180]: E1213 01:29:39.105814 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.106553 kubelet[3180]: W1213 01:29:39.105821 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.106553 kubelet[3180]: E1213 01:29:39.105899 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:29:39.106553 kubelet[3180]: E1213 01:29:39.105954 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.106553 kubelet[3180]: W1213 01:29:39.105962 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.106553 kubelet[3180]: E1213 01:29:39.106039 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.106553 kubelet[3180]: E1213 01:29:39.106124 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.106553 kubelet[3180]: W1213 01:29:39.106131 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.106553 kubelet[3180]: E1213 01:29:39.106144 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.106773 kubelet[3180]: E1213 01:29:39.106748 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.106773 kubelet[3180]: W1213 01:29:39.106768 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.106821 kubelet[3180]: E1213 01:29:39.106783 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.107228 kubelet[3180]: E1213 01:29:39.106965 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.107228 kubelet[3180]: W1213 01:29:39.106978 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.107228 kubelet[3180]: E1213 01:29:39.106987 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.107228 kubelet[3180]: E1213 01:29:39.107135 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.107228 kubelet[3180]: W1213 01:29:39.107143 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.107228 kubelet[3180]: E1213 01:29:39.107152 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:29:39.108222 kubelet[3180]: E1213 01:29:39.107299 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.108222 kubelet[3180]: W1213 01:29:39.107306 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.108222 kubelet[3180]: E1213 01:29:39.107314 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.108222 kubelet[3180]: E1213 01:29:39.107455 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.108222 kubelet[3180]: W1213 01:29:39.107462 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.108222 kubelet[3180]: E1213 01:29:39.107470 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.134579 containerd[1696]: time="2024-12-13T01:29:39.134353498Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:29:39.135558 containerd[1696]: time="2024-12-13T01:29:39.135331938Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:29:39.135558 containerd[1696]: time="2024-12-13T01:29:39.135358258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:39.136535 containerd[1696]: time="2024-12-13T01:29:39.135656018Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:39.158881 systemd[1]: Started cri-containerd-f5445919e844ad9140b32a9b6338e9ed14d040f24faaeeb31bb5ae572163d43e.scope - libcontainer container f5445919e844ad9140b32a9b6338e9ed14d040f24faaeeb31bb5ae572163d43e. 
Dec 13 01:29:39.180429 containerd[1696]: time="2024-12-13T01:29:39.180370213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bbjfr,Uid:8d1b7d01-5ed8-47be-84f0-2c8dff431d86,Namespace:calico-system,Attempt:0,}" Dec 13 01:29:39.193110 containerd[1696]: time="2024-12-13T01:29:39.192612851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-879897b5f-l8t9b,Uid:07617256-4393-4b62-bafd-3f482098b3ce,Namespace:calico-system,Attempt:0,} returns sandbox id \"f5445919e844ad9140b32a9b6338e9ed14d040f24faaeeb31bb5ae572163d43e\"" Dec 13 01:29:39.198013 containerd[1696]: time="2024-12-13T01:29:39.197876131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 01:29:39.206613 kubelet[3180]: E1213 01:29:39.206542 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.206613 kubelet[3180]: W1213 01:29:39.206563 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.206613 kubelet[3180]: E1213 01:29:39.206582 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.207231 kubelet[3180]: E1213 01:29:39.207132 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.207231 kubelet[3180]: W1213 01:29:39.207157 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.207231 kubelet[3180]: E1213 01:29:39.207176 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.207751 kubelet[3180]: E1213 01:29:39.207713 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.207751 kubelet[3180]: W1213 01:29:39.207741 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.207751 kubelet[3180]: E1213 01:29:39.207762 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.208235 kubelet[3180]: E1213 01:29:39.208203 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.208235 kubelet[3180]: W1213 01:29:39.208223 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.208458 kubelet[3180]: E1213 01:29:39.208437 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:29:39.209058 kubelet[3180]: E1213 01:29:39.208749 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.209836 kubelet[3180]: W1213 01:29:39.208767 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.210046 kubelet[3180]: E1213 01:29:39.210023 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.210256 kubelet[3180]: E1213 01:29:39.210232 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.210256 kubelet[3180]: W1213 01:29:39.210251 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.210256 kubelet[3180]: E1213 01:29:39.210288 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.210757 kubelet[3180]: E1213 01:29:39.210735 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.210757 kubelet[3180]: W1213 01:29:39.210753 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.210948 kubelet[3180]: E1213 01:29:39.210925 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.211311 kubelet[3180]: E1213 01:29:39.211284 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.211311 kubelet[3180]: W1213 01:29:39.211302 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.212139 kubelet[3180]: E1213 01:29:39.211827 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.212543 kubelet[3180]: E1213 01:29:39.212509 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.212786 kubelet[3180]: W1213 01:29:39.212756 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.212860 kubelet[3180]: E1213 01:29:39.212803 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:29:39.213923 kubelet[3180]: E1213 01:29:39.213895 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.213923 kubelet[3180]: W1213 01:29:39.213913 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.214116 kubelet[3180]: E1213 01:29:39.214018 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.214297 kubelet[3180]: E1213 01:29:39.214275 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.214297 kubelet[3180]: W1213 01:29:39.214292 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.214509 kubelet[3180]: E1213 01:29:39.214388 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.214720 kubelet[3180]: E1213 01:29:39.214698 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.214720 kubelet[3180]: W1213 01:29:39.214714 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.215100 kubelet[3180]: E1213 01:29:39.214777 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.215194 kubelet[3180]: E1213 01:29:39.215183 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.215227 kubelet[3180]: W1213 01:29:39.215194 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.215952 kubelet[3180]: E1213 01:29:39.215902 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.216174 kubelet[3180]: E1213 01:29:39.216148 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.216174 kubelet[3180]: W1213 01:29:39.216165 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.216174 kubelet[3180]: E1213 01:29:39.216201 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:29:39.216786 kubelet[3180]: E1213 01:29:39.216717 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.216786 kubelet[3180]: W1213 01:29:39.216726 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.216897 kubelet[3180]: E1213 01:29:39.216843 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.217044 kubelet[3180]: E1213 01:29:39.217027 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.217044 kubelet[3180]: W1213 01:29:39.217042 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.217726 kubelet[3180]: E1213 01:29:39.217699 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.218365 kubelet[3180]: E1213 01:29:39.218342 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.218365 kubelet[3180]: W1213 01:29:39.218356 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.218526 kubelet[3180]: E1213 01:29:39.218458 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.218696 kubelet[3180]: E1213 01:29:39.218676 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.218696 kubelet[3180]: W1213 01:29:39.218692 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.218913 kubelet[3180]: E1213 01:29:39.218808 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.219103 kubelet[3180]: E1213 01:29:39.219085 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.219103 kubelet[3180]: W1213 01:29:39.219098 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.219627 kubelet[3180]: E1213 01:29:39.219585 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:29:39.219775 kubelet[3180]: E1213 01:29:39.219755 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.219775 kubelet[3180]: W1213 01:29:39.219769 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.219904 kubelet[3180]: E1213 01:29:39.219854 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.220735 kubelet[3180]: E1213 01:29:39.220713 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.220735 kubelet[3180]: W1213 01:29:39.220728 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.221008 kubelet[3180]: E1213 01:29:39.220983 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.222260 kubelet[3180]: E1213 01:29:39.222217 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.222260 kubelet[3180]: W1213 01:29:39.222235 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.223391 kubelet[3180]: E1213 01:29:39.222936 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.223391 kubelet[3180]: W1213 01:29:39.222952 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.223391 kubelet[3180]: E1213 01:29:39.223138 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.223391 kubelet[3180]: W1213 01:29:39.223146 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.223391 kubelet[3180]: E1213 01:29:39.223158 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.223391 kubelet[3180]: E1213 01:29:39.223180 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:29:39.223391 kubelet[3180]: E1213 01:29:39.223341 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.223391 kubelet[3180]: W1213 01:29:39.223349 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.223391 kubelet[3180]: E1213 01:29:39.223358 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.223391 kubelet[3180]: E1213 01:29:39.223371 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.226226 kubelet[3180]: E1213 01:29:39.226199 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:39.226226 kubelet[3180]: W1213 01:29:39.226219 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:39.226318 kubelet[3180]: E1213 01:29:39.226232 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:39.237209 containerd[1696]: time="2024-12-13T01:29:39.236980526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:29:39.237209 containerd[1696]: time="2024-12-13T01:29:39.237027886Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:29:39.237209 containerd[1696]: time="2024-12-13T01:29:39.237038766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:39.237209 containerd[1696]: time="2024-12-13T01:29:39.237109806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:29:39.252880 systemd[1]: Started cri-containerd-c64a12145d9aaf58e0c8fcd9daba7f06be8d10ec52ef96c22748ab4e5d1e2f99.scope - libcontainer container c64a12145d9aaf58e0c8fcd9daba7f06be8d10ec52ef96c22748ab4e5d1e2f99. Dec 13 01:29:39.279072 containerd[1696]: time="2024-12-13T01:29:39.278880242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-bbjfr,Uid:8d1b7d01-5ed8-47be-84f0-2c8dff431d86,Namespace:calico-system,Attempt:0,} returns sandbox id \"c64a12145d9aaf58e0c8fcd9daba7f06be8d10ec52ef96c22748ab4e5d1e2f99\"" Dec 13 01:29:40.463494 kubelet[3180]: E1213 01:29:40.462177 3180 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z7bfm" podUID="a57b7da1-6e0a-4e25-9020-f599a3a71d0b" Dec 13 01:29:40.503838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1419406916.mount: Deactivated successfully. 
Dec 13 01:29:41.041834 containerd[1696]: time="2024-12-13T01:29:41.041761364Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:41.045146 containerd[1696]: time="2024-12-13T01:29:41.045090483Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Dec 13 01:29:41.049297 containerd[1696]: time="2024-12-13T01:29:41.049254763Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:41.054830 containerd[1696]: time="2024-12-13T01:29:41.054748642Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:41.055942 containerd[1696]: time="2024-12-13T01:29:41.055392162Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 1.857460071s" Dec 13 01:29:41.055942 containerd[1696]: time="2024-12-13T01:29:41.055435202Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Dec 13 01:29:41.056685 containerd[1696]: time="2024-12-13T01:29:41.056521162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 01:29:41.074196 containerd[1696]: time="2024-12-13T01:29:41.074160840Z" level=info msg="CreateContainer within sandbox \"f5445919e844ad9140b32a9b6338e9ed14d040f24faaeeb31bb5ae572163d43e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 01:29:41.120930 containerd[1696]: time="2024-12-13T01:29:41.120881995Z" level=info msg="CreateContainer within sandbox \"f5445919e844ad9140b32a9b6338e9ed14d040f24faaeeb31bb5ae572163d43e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"676ab5d2a60be01d3cc63116e91083c0c5c354832e1fbd91277074bd796b784c\"" Dec 13 01:29:41.121853 containerd[1696]: time="2024-12-13T01:29:41.121810435Z" level=info msg="StartContainer for \"676ab5d2a60be01d3cc63116e91083c0c5c354832e1fbd91277074bd796b784c\"" Dec 13 01:29:41.152921 systemd[1]: Started cri-containerd-676ab5d2a60be01d3cc63116e91083c0c5c354832e1fbd91277074bd796b784c.scope - libcontainer container 676ab5d2a60be01d3cc63116e91083c0c5c354832e1fbd91277074bd796b784c. 
Dec 13 01:29:41.187379 containerd[1696]: time="2024-12-13T01:29:41.187317667Z" level=info msg="StartContainer for \"676ab5d2a60be01d3cc63116e91083c0c5c354832e1fbd91277074bd796b784c\" returns successfully" Dec 13 01:29:41.581365 kubelet[3180]: I1213 01:29:41.581274 3180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-879897b5f-l8t9b" podStartSLOduration=1.720309592 podStartE2EDuration="3.581255903s" podCreationTimestamp="2024-12-13 01:29:38 +0000 UTC" firstStartedPulling="2024-12-13 01:29:39.195313931 +0000 UTC m=+22.820087050" lastFinishedPulling="2024-12-13 01:29:41.056260202 +0000 UTC m=+24.681033361" observedRunningTime="2024-12-13 01:29:41.580506623 +0000 UTC m=+25.205279742" watchObservedRunningTime="2024-12-13 01:29:41.581255903 +0000 UTC m=+25.206029022" Dec 13 01:29:41.617897 kubelet[3180]: E1213 01:29:41.617840 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:41.617897 kubelet[3180]: W1213 01:29:41.617865 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:41.617897 kubelet[3180]: E1213 01:29:41.617885 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:41.618164 kubelet[3180]: E1213 01:29:41.618065 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:41.618164 kubelet[3180]: W1213 01:29:41.618073 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:41.618164 kubelet[3180]: E1213 01:29:41.618082 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:41.618260 kubelet[3180]: E1213 01:29:41.618217 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:41.618260 kubelet[3180]: W1213 01:29:41.618225 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:41.618260 kubelet[3180]: E1213 01:29:41.618232 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:41.618393 kubelet[3180]: E1213 01:29:41.618355 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:41.618393 kubelet[3180]: W1213 01:29:41.618361 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:41.618393 kubelet[3180]: E1213 01:29:41.618369 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:29:41.618531 kubelet[3180]: E1213 01:29:41.618509 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:41.618531 kubelet[3180]: W1213 01:29:41.618521 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:41.618531 kubelet[3180]: E1213 01:29:41.618529 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:41.618677 kubelet[3180]: E1213 01:29:41.618648 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:41.618713 kubelet[3180]: W1213 01:29:41.618688 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:41.618713 kubelet[3180]: E1213 01:29:41.618698 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:41.618863 kubelet[3180]: E1213 01:29:41.618848 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:41.618863 kubelet[3180]: W1213 01:29:41.618860 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:41.618934 kubelet[3180]: E1213 01:29:41.618869 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:41.619011 kubelet[3180]: E1213 01:29:41.618997 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:41.619011 kubelet[3180]: W1213 01:29:41.619009 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:41.619109 kubelet[3180]: E1213 01:29:41.619017 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:41.619208 kubelet[3180]: E1213 01:29:41.619192 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:41.619208 kubelet[3180]: W1213 01:29:41.619205 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:41.619208 kubelet[3180]: E1213 01:29:41.619214 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:29:41.619364 kubelet[3180]: E1213 01:29:41.619349 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:41.619364 kubelet[3180]: W1213 01:29:41.619361 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:41.619427 kubelet[3180]: E1213 01:29:41.619372 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:41.619513 kubelet[3180]: E1213 01:29:41.619499 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:41.619513 kubelet[3180]: W1213 01:29:41.619510 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:41.619604 kubelet[3180]: E1213 01:29:41.619517 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:41.619668 kubelet[3180]: E1213 01:29:41.619645 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:41.619706 kubelet[3180]: W1213 01:29:41.619669 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:41.619706 kubelet[3180]: E1213 01:29:41.619678 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:41.619839 kubelet[3180]: E1213 01:29:41.619824 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:41.619839 kubelet[3180]: W1213 01:29:41.619837 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:41.619932 kubelet[3180]: E1213 01:29:41.619845 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:41.619981 kubelet[3180]: E1213 01:29:41.619972 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:41.619981 kubelet[3180]: W1213 01:29:41.619979 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:41.620071 kubelet[3180]: E1213 01:29:41.619987 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:29:41.620117 kubelet[3180]: E1213 01:29:41.620112 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:41.620159 kubelet[3180]: W1213 01:29:41.620118 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:41.620159 kubelet[3180]: E1213 01:29:41.620126 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:41.632546 kubelet[3180]: E1213 01:29:41.632525 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:41.632753 kubelet[3180]: W1213 01:29:41.632584 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:41.632753 kubelet[3180]: E1213 01:29:41.632602 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:41.633072 kubelet[3180]: E1213 01:29:41.632984 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:41.633072 kubelet[3180]: W1213 01:29:41.632997 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:41.633072 kubelet[3180]: E1213 01:29:41.633016 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:41.633204 kubelet[3180]: E1213 01:29:41.633177 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:41.633204 kubelet[3180]: W1213 01:29:41.633201 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:41.633266 kubelet[3180]: E1213 01:29:41.633221 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:41.633388 kubelet[3180]: E1213 01:29:41.633365 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:41.633388 kubelet[3180]: W1213 01:29:41.633379 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:41.633464 kubelet[3180]: E1213 01:29:41.633395 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:29:41.633554 kubelet[3180]: E1213 01:29:41.633535 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:41.633554 kubelet[3180]: W1213 01:29:41.633549 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:41.633751 kubelet[3180]: E1213 01:29:41.633563 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:41.633839 kubelet[3180]: E1213 01:29:41.633825 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:41.633896 kubelet[3180]: W1213 01:29:41.633886 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:41.634083 kubelet[3180]: E1213 01:29:41.633954 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:41.634195 kubelet[3180]: E1213 01:29:41.634183 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:41.634256 kubelet[3180]: W1213 01:29:41.634246 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:41.634322 kubelet[3180]: E1213 01:29:41.634311 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:41.634582 kubelet[3180]: E1213 01:29:41.634561 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:41.634582 kubelet[3180]: W1213 01:29:41.634577 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:41.634672 kubelet[3180]: E1213 01:29:41.634593 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:41.634849 kubelet[3180]: E1213 01:29:41.634750 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:41.634849 kubelet[3180]: W1213 01:29:41.634765 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:41.634849 kubelet[3180]: E1213 01:29:41.634774 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:29:41.635364 kubelet[3180]: E1213 01:29:41.634941 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:41.635364 kubelet[3180]: W1213 01:29:41.634952 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:41.635364 kubelet[3180]: E1213 01:29:41.634971 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:41.635364 kubelet[3180]: E1213 01:29:41.635194 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:41.635364 kubelet[3180]: W1213 01:29:41.635204 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:41.635364 kubelet[3180]: E1213 01:29:41.635214 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:41.635364 kubelet[3180]: E1213 01:29:41.635334 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:41.635364 kubelet[3180]: W1213 01:29:41.635340 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:41.635364 kubelet[3180]: E1213 01:29:41.635348 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:41.635556 kubelet[3180]: E1213 01:29:41.635488 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:41.635556 kubelet[3180]: W1213 01:29:41.635496 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:41.635556 kubelet[3180]: E1213 01:29:41.635504 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:41.635853 kubelet[3180]: E1213 01:29:41.635838 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:41.635933 kubelet[3180]: W1213 01:29:41.635922 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:41.636002 kubelet[3180]: E1213 01:29:41.635990 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:29:41.636262 kubelet[3180]: E1213 01:29:41.636240 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:41.636262 kubelet[3180]: W1213 01:29:41.636257 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:41.636350 kubelet[3180]: E1213 01:29:41.636274 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:41.636453 kubelet[3180]: E1213 01:29:41.636437 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:41.636453 kubelet[3180]: W1213 01:29:41.636450 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:41.636517 kubelet[3180]: E1213 01:29:41.636467 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:41.636632 kubelet[3180]: E1213 01:29:41.636617 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:41.636632 kubelet[3180]: W1213 01:29:41.636630 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:41.636719 kubelet[3180]: E1213 01:29:41.636638 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:29:41.637002 kubelet[3180]: E1213 01:29:41.636983 3180 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:29:41.637002 kubelet[3180]: W1213 01:29:41.636999 3180 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:29:41.637069 kubelet[3180]: E1213 01:29:41.637010 3180 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:29:42.334780 containerd[1696]: time="2024-12-13T01:29:42.334726335Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:42.337332 containerd[1696]: time="2024-12-13T01:29:42.337197775Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Dec 13 01:29:42.346611 containerd[1696]: time="2024-12-13T01:29:42.346508413Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:42.355029 containerd[1696]: time="2024-12-13T01:29:42.354947812Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:42.355752 containerd[1696]: time="2024-12-13T01:29:42.355555132Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.29898913s" Dec 13 01:29:42.355752 containerd[1696]: time="2024-12-13T01:29:42.355592772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Dec 13 01:29:42.359331 containerd[1696]: time="2024-12-13T01:29:42.359279452Z" level=info msg="CreateContainer within sandbox \"c64a12145d9aaf58e0c8fcd9daba7f06be8d10ec52ef96c22748ab4e5d1e2f99\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 01:29:42.413393 containerd[1696]: time="2024-12-13T01:29:42.413234405Z" level=info msg="CreateContainer within sandbox \"c64a12145d9aaf58e0c8fcd9daba7f06be8d10ec52ef96c22748ab4e5d1e2f99\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"eea5892db56d02edfaab5c29b1caa574f7840afc9fecdd82885f5c9f131f205b\"" Dec 13 01:29:42.414148 containerd[1696]: time="2024-12-13T01:29:42.413945285Z" level=info msg="StartContainer for \"eea5892db56d02edfaab5c29b1caa574f7840afc9fecdd82885f5c9f131f205b\"" Dec 13 01:29:42.457882 systemd[1]: Started cri-containerd-eea5892db56d02edfaab5c29b1caa574f7840afc9fecdd82885f5c9f131f205b.scope - libcontainer container eea5892db56d02edfaab5c29b1caa574f7840afc9fecdd82885f5c9f131f205b. Dec 13 01:29:42.463743 kubelet[3180]: E1213 01:29:42.463343 3180 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z7bfm" podUID="a57b7da1-6e0a-4e25-9020-f599a3a71d0b" Dec 13 01:29:42.497352 containerd[1696]: time="2024-12-13T01:29:42.497234474Z" level=info msg="StartContainer for \"eea5892db56d02edfaab5c29b1caa574f7840afc9fecdd82885f5c9f131f205b\" returns successfully" Dec 13 01:29:42.506807 systemd[1]: cri-containerd-eea5892db56d02edfaab5c29b1caa574f7840afc9fecdd82885f5c9f131f205b.scope: Deactivated successfully. 
Dec 13 01:29:42.528523 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eea5892db56d02edfaab5c29b1caa574f7840afc9fecdd82885f5c9f131f205b-rootfs.mount: Deactivated successfully. Dec 13 01:29:42.572650 kubelet[3180]: I1213 01:29:42.571784 3180 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:29:43.361438 containerd[1696]: time="2024-12-13T01:29:43.361382284Z" level=info msg="shim disconnected" id=eea5892db56d02edfaab5c29b1caa574f7840afc9fecdd82885f5c9f131f205b namespace=k8s.io Dec 13 01:29:43.362155 containerd[1696]: time="2024-12-13T01:29:43.361736644Z" level=warning msg="cleaning up after shim disconnected" id=eea5892db56d02edfaab5c29b1caa574f7840afc9fecdd82885f5c9f131f205b namespace=k8s.io Dec 13 01:29:43.362155 containerd[1696]: time="2024-12-13T01:29:43.361753204Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:29:43.576787 containerd[1696]: time="2024-12-13T01:29:43.576484177Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 01:29:44.462974 kubelet[3180]: E1213 01:29:44.462931 3180 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z7bfm" podUID="a57b7da1-6e0a-4e25-9020-f599a3a71d0b" Dec 13 01:29:46.235830 containerd[1696]: time="2024-12-13T01:29:46.235766558Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:46.237872 containerd[1696]: time="2024-12-13T01:29:46.237825038Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Dec 13 01:29:46.242206 containerd[1696]: time="2024-12-13T01:29:46.242134677Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:46.249338 containerd[1696]: time="2024-12-13T01:29:46.249270756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:46.250245 containerd[1696]: time="2024-12-13T01:29:46.249967396Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 2.673442219s" Dec 13 01:29:46.250245 containerd[1696]: time="2024-12-13T01:29:46.250002956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Dec 13 01:29:46.253829 containerd[1696]: time="2024-12-13T01:29:46.253554756Z" level=info msg="CreateContainer within sandbox \"c64a12145d9aaf58e0c8fcd9daba7f06be8d10ec52ef96c22748ab4e5d1e2f99\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 01:29:46.297868 containerd[1696]: time="2024-12-13T01:29:46.297822510Z" level=info msg="CreateContainer within sandbox \"c64a12145d9aaf58e0c8fcd9daba7f06be8d10ec52ef96c22748ab4e5d1e2f99\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns 
container id \"0b778d8cb3e5fadd890141be4a27146a064ce793855a340f16262dcf0e0aed1f\"" Dec 13 01:29:46.298604 containerd[1696]: time="2024-12-13T01:29:46.298418750Z" level=info msg="StartContainer for \"0b778d8cb3e5fadd890141be4a27146a064ce793855a340f16262dcf0e0aed1f\"" Dec 13 01:29:46.332822 systemd[1]: Started cri-containerd-0b778d8cb3e5fadd890141be4a27146a064ce793855a340f16262dcf0e0aed1f.scope - libcontainer container 0b778d8cb3e5fadd890141be4a27146a064ce793855a340f16262dcf0e0aed1f. Dec 13 01:29:46.362167 containerd[1696]: time="2024-12-13T01:29:46.362123582Z" level=info msg="StartContainer for \"0b778d8cb3e5fadd890141be4a27146a064ce793855a340f16262dcf0e0aed1f\" returns successfully" Dec 13 01:29:46.462239 kubelet[3180]: E1213 01:29:46.462123 3180 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z7bfm" podUID="a57b7da1-6e0a-4e25-9020-f599a3a71d0b" Dec 13 01:29:47.463910 containerd[1696]: time="2024-12-13T01:29:47.463846041Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:29:47.466610 systemd[1]: cri-containerd-0b778d8cb3e5fadd890141be4a27146a064ce793855a340f16262dcf0e0aed1f.scope: Deactivated successfully. Dec 13 01:29:47.486397 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b778d8cb3e5fadd890141be4a27146a064ce793855a340f16262dcf0e0aed1f-rootfs.mount: Deactivated successfully. Dec 13 01:29:47.556901 kubelet[3180]: I1213 01:29:47.556866 3180 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:29:47.835578 kubelet[3180]: I1213 01:29:47.587086 3180 topology_manager.go:215] "Topology Admit Handler" podUID="6371d89c-b939-4616-95e1-8dfb9a85f504" podNamespace="kube-system" podName="coredns-7db6d8ff4d-9mvzk" Dec 13 01:29:47.835578 kubelet[3180]: I1213 01:29:47.592728 3180 topology_manager.go:215] "Topology Admit Handler" podUID="a04b6ce3-5f74-4242-b4cb-d590307c3dbb" podNamespace="calico-system" podName="calico-kube-controllers-75778b7f7b-qcllh" Dec 13 01:29:47.835578 kubelet[3180]: I1213 01:29:47.601428 3180 topology_manager.go:215] "Topology Admit Handler" podUID="f2830d02-b780-4c40-8169-2cb412cf67f7" podNamespace="kube-system" podName="coredns-7db6d8ff4d-qlsjk" Dec 13 01:29:47.835578 kubelet[3180]: I1213 01:29:47.608787 3180 topology_manager.go:215] "Topology Admit Handler" podUID="fba0a9d1-bb75-416a-9f26-d9707b860c3d" podNamespace="calico-apiserver" podName="calico-apiserver-6cd5658d56-mlhmd" Dec 13 01:29:47.835578 kubelet[3180]: I1213 01:29:47.609817 3180 topology_manager.go:215] "Topology Admit Handler" podUID="7833e1a8-e541-4910-80f7-3565f077e0a5" podNamespace="calico-apiserver" podName="calico-apiserver-6cd5658d56-plblc" Dec 13 01:29:47.835578 kubelet[3180]: I1213 01:29:47.676800 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a04b6ce3-5f74-4242-b4cb-d590307c3dbb-tigera-ca-bundle\") pod \"calico-kube-controllers-75778b7f7b-qcllh\" (UID: \"a04b6ce3-5f74-4242-b4cb-d590307c3dbb\") " pod="calico-system/calico-kube-controllers-75778b7f7b-qcllh" Dec 13 01:29:47.835578 kubelet[3180]: I1213 01:29:47.676861 
3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqvht\" (UniqueName: \"kubernetes.io/projected/6371d89c-b939-4616-95e1-8dfb9a85f504-kube-api-access-tqvht\") pod \"coredns-7db6d8ff4d-9mvzk\" (UID: \"6371d89c-b939-4616-95e1-8dfb9a85f504\") " pod="kube-system/coredns-7db6d8ff4d-9mvzk" Dec 13 01:29:47.598885 systemd[1]: Created slice kubepods-burstable-pod6371d89c_b939_4616_95e1_8dfb9a85f504.slice - libcontainer container kubepods-burstable-pod6371d89c_b939_4616_95e1_8dfb9a85f504.slice. Dec 13 01:29:47.835955 kubelet[3180]: I1213 01:29:47.676881 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45b9n\" (UniqueName: \"kubernetes.io/projected/f2830d02-b780-4c40-8169-2cb412cf67f7-kube-api-access-45b9n\") pod \"coredns-7db6d8ff4d-qlsjk\" (UID: \"f2830d02-b780-4c40-8169-2cb412cf67f7\") " pod="kube-system/coredns-7db6d8ff4d-qlsjk" Dec 13 01:29:47.835955 kubelet[3180]: I1213 01:29:47.676898 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6371d89c-b939-4616-95e1-8dfb9a85f504-config-volume\") pod \"coredns-7db6d8ff4d-9mvzk\" (UID: \"6371d89c-b939-4616-95e1-8dfb9a85f504\") " pod="kube-system/coredns-7db6d8ff4d-9mvzk" Dec 13 01:29:47.835955 kubelet[3180]: I1213 01:29:47.676918 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmw9c\" (UniqueName: \"kubernetes.io/projected/7833e1a8-e541-4910-80f7-3565f077e0a5-kube-api-access-kmw9c\") pod \"calico-apiserver-6cd5658d56-plblc\" (UID: \"7833e1a8-e541-4910-80f7-3565f077e0a5\") " pod="calico-apiserver/calico-apiserver-6cd5658d56-plblc" Dec 13 01:29:47.835955 kubelet[3180]: I1213 01:29:47.676943 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gpf9\" (UniqueName: \"kubernetes.io/projected/fba0a9d1-bb75-416a-9f26-d9707b860c3d-kube-api-access-2gpf9\") pod \"calico-apiserver-6cd5658d56-mlhmd\" (UID: \"fba0a9d1-bb75-416a-9f26-d9707b860c3d\") " pod="calico-apiserver/calico-apiserver-6cd5658d56-mlhmd" Dec 13 01:29:47.835955 kubelet[3180]: I1213 01:29:47.676959 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7833e1a8-e541-4910-80f7-3565f077e0a5-calico-apiserver-certs\") pod \"calico-apiserver-6cd5658d56-plblc\" (UID: \"7833e1a8-e541-4910-80f7-3565f077e0a5\") " pod="calico-apiserver/calico-apiserver-6cd5658d56-plblc" Dec 13 01:29:47.611126 systemd[1]: Created slice kubepods-besteffort-poda04b6ce3_5f74_4242_b4cb_d590307c3dbb.slice - libcontainer container kubepods-besteffort-poda04b6ce3_5f74_4242_b4cb_d590307c3dbb.slice. 
Dec 13 01:29:47.836107 kubelet[3180]: I1213 01:29:47.676976 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fba0a9d1-bb75-416a-9f26-d9707b860c3d-calico-apiserver-certs\") pod \"calico-apiserver-6cd5658d56-mlhmd\" (UID: \"fba0a9d1-bb75-416a-9f26-d9707b860c3d\") " pod="calico-apiserver/calico-apiserver-6cd5658d56-mlhmd" Dec 13 01:29:47.836107 kubelet[3180]: I1213 01:29:47.676992 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2830d02-b780-4c40-8169-2cb412cf67f7-config-volume\") pod \"coredns-7db6d8ff4d-qlsjk\" (UID: \"f2830d02-b780-4c40-8169-2cb412cf67f7\") " pod="kube-system/coredns-7db6d8ff4d-qlsjk" Dec 13 01:29:47.836107 kubelet[3180]: I1213 01:29:47.677008 3180 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdhqc\" (UniqueName: \"kubernetes.io/projected/a04b6ce3-5f74-4242-b4cb-d590307c3dbb-kube-api-access-hdhqc\") pod \"calico-kube-controllers-75778b7f7b-qcllh\" (UID: \"a04b6ce3-5f74-4242-b4cb-d590307c3dbb\") " pod="calico-system/calico-kube-controllers-75778b7f7b-qcllh" Dec 13 01:29:47.620133 systemd[1]: Created slice kubepods-burstable-podf2830d02_b780_4c40_8169_2cb412cf67f7.slice - libcontainer container kubepods-burstable-podf2830d02_b780_4c40_8169_2cb412cf67f7.slice. Dec 13 01:29:47.624484 systemd[1]: Created slice kubepods-besteffort-pod7833e1a8_e541_4910_80f7_3565f077e0a5.slice - libcontainer container kubepods-besteffort-pod7833e1a8_e541_4910_80f7_3565f077e0a5.slice. Dec 13 01:29:47.634618 systemd[1]: Created slice kubepods-besteffort-podfba0a9d1_bb75_416a_9f26_d9707b860c3d.slice - libcontainer container kubepods-besteffort-podfba0a9d1_bb75_416a_9f26_d9707b860c3d.slice. Dec 13 01:29:48.468315 systemd[1]: Created slice kubepods-besteffort-poda57b7da1_6e0a_4e25_9020_f599a3a71d0b.slice - libcontainer container kubepods-besteffort-poda57b7da1_6e0a_4e25_9020_f599a3a71d0b.slice. 
Dec 13 01:29:48.470798 containerd[1696]: time="2024-12-13T01:29:48.470759913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z7bfm,Uid:a57b7da1-6e0a-4e25-9020-f599a3a71d0b,Namespace:calico-system,Attempt:0,}" Dec 13 01:29:48.589654 containerd[1696]: time="2024-12-13T01:29:48.589580538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9mvzk,Uid:6371d89c-b939-4616-95e1-8dfb9a85f504,Namespace:kube-system,Attempt:0,}" Dec 13 01:29:48.591849 containerd[1696]: time="2024-12-13T01:29:48.591735538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qlsjk,Uid:f2830d02-b780-4c40-8169-2cb412cf67f7,Namespace:kube-system,Attempt:0,}" Dec 13 01:29:48.597862 containerd[1696]: time="2024-12-13T01:29:48.597548457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75778b7f7b-qcllh,Uid:a04b6ce3-5f74-4242-b4cb-d590307c3dbb,Namespace:calico-system,Attempt:0,}" Dec 13 01:29:48.597978 containerd[1696]: time="2024-12-13T01:29:48.597864457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd5658d56-plblc,Uid:7833e1a8-e541-4910-80f7-3565f077e0a5,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:29:48.598277 containerd[1696]: time="2024-12-13T01:29:48.598249137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd5658d56-mlhmd,Uid:fba0a9d1-bb75-416a-9f26-d9707b860c3d,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:29:48.648851 containerd[1696]: time="2024-12-13T01:29:48.648780930Z" level=info msg="shim disconnected" id=0b778d8cb3e5fadd890141be4a27146a064ce793855a340f16262dcf0e0aed1f namespace=k8s.io Dec 13 01:29:48.648851 containerd[1696]: time="2024-12-13T01:29:48.648842410Z" level=warning msg="cleaning up after shim disconnected" id=0b778d8cb3e5fadd890141be4a27146a064ce793855a340f16262dcf0e0aed1f namespace=k8s.io Dec 13 01:29:48.648851 containerd[1696]: time="2024-12-13T01:29:48.648851010Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:29:48.942917 containerd[1696]: time="2024-12-13T01:29:48.942853853Z" level=error msg="Failed to destroy network for sandbox \"cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:29:48.945496 containerd[1696]: time="2024-12-13T01:29:48.945434493Z" level=error msg="encountered an error cleaning up failed sandbox \"cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:29:48.945604 containerd[1696]: time="2024-12-13T01:29:48.945523812Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z7bfm,Uid:a57b7da1-6e0a-4e25-9020-f599a3a71d0b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:29:48.946713 kubelet[3180]: E1213 01:29:48.946656 3180 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:29:48.948114 kubelet[3180]: E1213 01:29:48.947087 3180 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z7bfm" Dec 13 01:29:48.948114 kubelet[3180]: E1213 01:29:48.947734 3180 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z7bfm" Dec 13 01:29:48.948114 kubelet[3180]: E1213 01:29:48.947824 3180 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z7bfm_calico-system(a57b7da1-6e0a-4e25-9020-f599a3a71d0b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z7bfm_calico-system(a57b7da1-6e0a-4e25-9020-f599a3a71d0b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z7bfm" podUID="a57b7da1-6e0a-4e25-9020-f599a3a71d0b" Dec 13 01:29:49.035260 containerd[1696]: time="2024-12-13T01:29:49.035213281Z" level=error msg="Failed to destroy network for sandbox \"ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:29:49.035791 containerd[1696]: time="2024-12-13T01:29:49.035762441Z" level=error msg="encountered an error cleaning up failed sandbox \"ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:29:49.035952 containerd[1696]: time="2024-12-13T01:29:49.035928321Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qlsjk,Uid:f2830d02-b780-4c40-8169-2cb412cf67f7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:29:49.037436 kubelet[3180]: E1213 01:29:49.037395 3180 remote_runtime.go:193] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:29:49.037526 kubelet[3180]: E1213 01:29:49.037453 3180 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-qlsjk" Dec 13 01:29:49.037526 kubelet[3180]: E1213 01:29:49.037482 3180 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-qlsjk" Dec 13 01:29:49.037581 kubelet[3180]: E1213 01:29:49.037524 3180 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-qlsjk_kube-system(f2830d02-b780-4c40-8169-2cb412cf67f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-qlsjk_kube-system(f2830d02-b780-4c40-8169-2cb412cf67f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-qlsjk" podUID="f2830d02-b780-4c40-8169-2cb412cf67f7" Dec 13 01:29:49.043752 containerd[1696]: time="2024-12-13T01:29:49.043704000Z" level=error msg="Failed to destroy network for sandbox \"1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:29:49.044820 containerd[1696]: time="2024-12-13T01:29:49.044784280Z" level=error msg="encountered an error cleaning up failed sandbox \"1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:29:49.045491 containerd[1696]: time="2024-12-13T01:29:49.045369960Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd5658d56-mlhmd,Uid:fba0a9d1-bb75-416a-9f26-d9707b860c3d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 
01:29:49.045892 kubelet[3180]: E1213 01:29:49.045853 3180 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:29:49.045995 kubelet[3180]: E1213 01:29:49.045949 3180 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cd5658d56-mlhmd" Dec 13 01:29:49.045995 kubelet[3180]: E1213 01:29:49.045973 3180 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cd5658d56-mlhmd" Dec 13 01:29:49.046790 kubelet[3180]: E1213 01:29:49.046018 3180 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cd5658d56-mlhmd_calico-apiserver(fba0a9d1-bb75-416a-9f26-d9707b860c3d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cd5658d56-mlhmd_calico-apiserver(fba0a9d1-bb75-416a-9f26-d9707b860c3d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cd5658d56-mlhmd" podUID="fba0a9d1-bb75-416a-9f26-d9707b860c3d" Dec 13 01:29:49.060324 containerd[1696]: time="2024-12-13T01:29:49.060278398Z" level=error msg="Failed to destroy network for sandbox \"c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:29:49.060842 containerd[1696]: time="2024-12-13T01:29:49.060790518Z" level=error msg="encountered an error cleaning up failed sandbox \"c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:29:49.060981 containerd[1696]: time="2024-12-13T01:29:49.060958878Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd5658d56-plblc,Uid:7833e1a8-e541-4910-80f7-3565f077e0a5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:29:49.061544 kubelet[3180]: E1213 01:29:49.061313 3180 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:29:49.061544 kubelet[3180]: E1213 01:29:49.061366 3180 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cd5658d56-plblc" Dec 13 01:29:49.061544 kubelet[3180]: E1213 01:29:49.061397 3180 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6cd5658d56-plblc" Dec 13 01:29:49.061687 kubelet[3180]: E1213 01:29:49.061441 3180 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6cd5658d56-plblc_calico-apiserver(7833e1a8-e541-4910-80f7-3565f077e0a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6cd5658d56-plblc_calico-apiserver(7833e1a8-e541-4910-80f7-3565f077e0a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cd5658d56-plblc" podUID="7833e1a8-e541-4910-80f7-3565f077e0a5" Dec 13 01:29:49.063337 containerd[1696]: time="2024-12-13T01:29:49.063294677Z" level=error msg="Failed to destroy network for sandbox \"657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:29:49.063739 containerd[1696]: time="2024-12-13T01:29:49.063705317Z" level=error msg="encountered an error cleaning up failed sandbox \"657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:29:49.063823 containerd[1696]: time="2024-12-13T01:29:49.063760397Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75778b7f7b-qcllh,Uid:a04b6ce3-5f74-4242-b4cb-d590307c3dbb,Namespace:calico-system,Attempt:0,} failed, error" 
error="failed to setup network for sandbox \"657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:29:49.064225 kubelet[3180]: E1213 01:29:49.064060 3180 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:29:49.064225 kubelet[3180]: E1213 01:29:49.064113 3180 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75778b7f7b-qcllh" Dec 13 01:29:49.064225 kubelet[3180]: E1213 01:29:49.064140 3180 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-75778b7f7b-qcllh" Dec 13 01:29:49.064340 kubelet[3180]: E1213 01:29:49.064185 3180 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-75778b7f7b-qcllh_calico-system(a04b6ce3-5f74-4242-b4cb-d590307c3dbb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-75778b7f7b-qcllh_calico-system(a04b6ce3-5f74-4242-b4cb-d590307c3dbb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-75778b7f7b-qcllh" podUID="a04b6ce3-5f74-4242-b4cb-d590307c3dbb" Dec 13 01:29:49.069017 containerd[1696]: time="2024-12-13T01:29:49.068964117Z" level=error msg="Failed to destroy network for sandbox \"86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:29:49.069299 containerd[1696]: time="2024-12-13T01:29:49.069270557Z" level=error msg="encountered an error cleaning up failed sandbox \"86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:29:49.069373 containerd[1696]: time="2024-12-13T01:29:49.069343517Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9mvzk,Uid:6371d89c-b939-4616-95e1-8dfb9a85f504,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:29:49.069556 kubelet[3180]: E1213 01:29:49.069521 3180 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:29:49.069644 kubelet[3180]: E1213 01:29:49.069574 3180 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9mvzk" Dec 13 01:29:49.069644 kubelet[3180]: E1213 01:29:49.069592 3180 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9mvzk" Dec 13 01:29:49.069972 kubelet[3180]: E1213 01:29:49.069635 3180 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-9mvzk_kube-system(6371d89c-b939-4616-95e1-8dfb9a85f504)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-9mvzk_kube-system(6371d89c-b939-4616-95e1-8dfb9a85f504)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-9mvzk" podUID="6371d89c-b939-4616-95e1-8dfb9a85f504" Dec 13 01:29:49.593297 kubelet[3180]: I1213 01:29:49.593260 3180 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf" Dec 13 01:29:49.593967 containerd[1696]: time="2024-12-13T01:29:49.593922530Z" level=info msg="StopPodSandbox for \"c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf\"" Dec 13 01:29:49.594249 containerd[1696]: time="2024-12-13T01:29:49.594102010Z" level=info msg="Ensure that sandbox c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf in task-service has been cleanup successfully" Dec 13 01:29:49.598275 kubelet[3180]: I1213 01:29:49.597175 3180 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98" Dec 13 01:29:49.598422 
containerd[1696]: time="2024-12-13T01:29:49.597776569Z" level=info msg="StopPodSandbox for \"ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98\"" Dec 13 01:29:49.598422 containerd[1696]: time="2024-12-13T01:29:49.597950169Z" level=info msg="Ensure that sandbox ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98 in task-service has been cleanup successfully" Dec 13 01:29:49.602224 kubelet[3180]: I1213 01:29:49.602101 3180 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837" Dec 13 01:29:49.603369 containerd[1696]: time="2024-12-13T01:29:49.603308449Z" level=info msg="StopPodSandbox for \"cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837\"" Dec 13 01:29:49.604100 containerd[1696]: time="2024-12-13T01:29:49.604061009Z" level=info msg="Ensure that sandbox cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837 in task-service has been cleanup successfully" Dec 13 01:29:49.614016 containerd[1696]: time="2024-12-13T01:29:49.613841767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 01:29:49.617889 kubelet[3180]: I1213 01:29:49.617307 3180 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94" Dec 13 01:29:49.620647 containerd[1696]: time="2024-12-13T01:29:49.619832647Z" level=info msg="StopPodSandbox for \"1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94\"" Dec 13 01:29:49.622187 containerd[1696]: time="2024-12-13T01:29:49.621160926Z" level=info msg="Ensure that sandbox 1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94 in task-service has been cleanup successfully" Dec 13 01:29:49.627371 kubelet[3180]: I1213 01:29:49.627008 3180 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4" Dec 13 01:29:49.628983 containerd[1696]: time="2024-12-13T01:29:49.628856925Z" level=info msg="StopPodSandbox for \"657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4\"" Dec 13 01:29:49.631559 containerd[1696]: time="2024-12-13T01:29:49.629912925Z" level=info msg="Ensure that sandbox 657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4 in task-service has been cleanup successfully" Dec 13 01:29:49.644403 kubelet[3180]: I1213 01:29:49.643572 3180 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f" Dec 13 01:29:49.645394 containerd[1696]: time="2024-12-13T01:29:49.645340043Z" level=info msg="StopPodSandbox for \"86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f\"" Dec 13 01:29:49.645543 containerd[1696]: time="2024-12-13T01:29:49.645518323Z" level=info msg="Ensure that sandbox 86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f in task-service has been cleanup successfully" Dec 13 01:29:49.684334 containerd[1696]: time="2024-12-13T01:29:49.684269318Z" level=error msg="StopPodSandbox for \"c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf\" failed" error="failed to destroy network for sandbox \"c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:29:49.684673 
kubelet[3180]: E1213 01:29:49.684623 3180 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf" Dec 13 01:29:49.684958 kubelet[3180]: E1213 01:29:49.684801 3180 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf"} Dec 13 01:29:49.684958 kubelet[3180]: E1213 01:29:49.684869 3180 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7833e1a8-e541-4910-80f7-3565f077e0a5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:29:49.684958 kubelet[3180]: E1213 01:29:49.684929 3180 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7833e1a8-e541-4910-80f7-3565f077e0a5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cd5658d56-plblc" podUID="7833e1a8-e541-4910-80f7-3565f077e0a5" Dec 13 01:29:49.698498 containerd[1696]: time="2024-12-13T01:29:49.698446317Z" level=error msg="StopPodSandbox for \"cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837\" failed" error="failed to destroy network for sandbox \"cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:29:49.699176 kubelet[3180]: E1213 01:29:49.699137 3180 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837" Dec 13 01:29:49.699323 kubelet[3180]: E1213 01:29:49.699300 3180 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837"} Dec 13 01:29:49.699729 containerd[1696]: time="2024-12-13T01:29:49.699601556Z" level=error msg="StopPodSandbox for \"ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98\" failed" error="failed to destroy network for sandbox \"ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98\": plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:29:49.699861 kubelet[3180]: E1213 01:29:49.699839 3180 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a57b7da1-6e0a-4e25-9020-f599a3a71d0b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:29:49.699993 kubelet[3180]: E1213 01:29:49.699972 3180 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a57b7da1-6e0a-4e25-9020-f599a3a71d0b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z7bfm" podUID="a57b7da1-6e0a-4e25-9020-f599a3a71d0b" Dec 13 01:29:49.700080 kubelet[3180]: E1213 01:29:49.699877 3180 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98" Dec 13 01:29:49.700184 kubelet[3180]: E1213 01:29:49.700167 3180 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98"} Dec 13 01:29:49.700261 kubelet[3180]: E1213 01:29:49.700248 3180 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f2830d02-b780-4c40-8169-2cb412cf67f7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:29:49.700340 kubelet[3180]: E1213 01:29:49.700324 3180 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f2830d02-b780-4c40-8169-2cb412cf67f7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-qlsjk" podUID="f2830d02-b780-4c40-8169-2cb412cf67f7" Dec 13 01:29:49.719036 containerd[1696]: time="2024-12-13T01:29:49.718971394Z" level=error msg="StopPodSandbox for \"1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94\" failed" error="failed to destroy network for 
sandbox \"1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:29:49.719477 kubelet[3180]: E1213 01:29:49.719433 3180 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94" Dec 13 01:29:49.719477 kubelet[3180]: E1213 01:29:49.719487 3180 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94"} Dec 13 01:29:49.719696 kubelet[3180]: E1213 01:29:49.719573 3180 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fba0a9d1-bb75-416a-9f26-d9707b860c3d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:29:49.719696 kubelet[3180]: E1213 01:29:49.719603 3180 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fba0a9d1-bb75-416a-9f26-d9707b860c3d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6cd5658d56-mlhmd" podUID="fba0a9d1-bb75-416a-9f26-d9707b860c3d" Dec 13 01:29:49.721021 containerd[1696]: time="2024-12-13T01:29:49.720977034Z" level=error msg="StopPodSandbox for \"86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f\" failed" error="failed to destroy network for sandbox \"86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:29:49.721718 kubelet[3180]: E1213 01:29:49.721291 3180 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f" Dec 13 01:29:49.721718 kubelet[3180]: E1213 01:29:49.721358 3180 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f"} Dec 13 01:29:49.721718 kubelet[3180]: E1213 01:29:49.721386 
3180 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6371d89c-b939-4616-95e1-8dfb9a85f504\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:29:49.721718 kubelet[3180]: E1213 01:29:49.721405 3180 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6371d89c-b939-4616-95e1-8dfb9a85f504\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-9mvzk" podUID="6371d89c-b939-4616-95e1-8dfb9a85f504" Dec 13 01:29:49.723064 containerd[1696]: time="2024-12-13T01:29:49.723019393Z" level=error msg="StopPodSandbox for \"657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4\" failed" error="failed to destroy network for sandbox \"657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:29:49.723264 kubelet[3180]: E1213 01:29:49.723220 3180 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4" Dec 13 01:29:49.723306 kubelet[3180]: E1213 01:29:49.723260 3180 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4"} Dec 13 01:29:49.723306 kubelet[3180]: E1213 01:29:49.723296 3180 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a04b6ce3-5f74-4242-b4cb-d590307c3dbb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:29:49.723405 kubelet[3180]: E1213 01:29:49.723315 3180 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a04b6ce3-5f74-4242-b4cb-d590307c3dbb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-75778b7f7b-qcllh" podUID="a04b6ce3-5f74-4242-b4cb-d590307c3dbb" Dec 13 01:29:49.796345 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98-shm.mount: Deactivated successfully. Dec 13 01:29:49.796434 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f-shm.mount: Deactivated successfully. Dec 13 01:29:49.796482 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837-shm.mount: Deactivated successfully. Dec 13 01:29:53.730597 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount574512496.mount: Deactivated successfully. Dec 13 01:29:53.869359 containerd[1696]: time="2024-12-13T01:29:53.869296702Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:53.872486 containerd[1696]: time="2024-12-13T01:29:53.872324781Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Dec 13 01:29:53.877350 containerd[1696]: time="2024-12-13T01:29:53.877201140Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:53.882070 containerd[1696]: time="2024-12-13T01:29:53.882004458Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:29:53.882993 containerd[1696]: time="2024-12-13T01:29:53.882542698Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 4.268551651s" Dec 13 01:29:53.882993 containerd[1696]: time="2024-12-13T01:29:53.882581498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Dec 13 01:29:53.896427 containerd[1696]: time="2024-12-13T01:29:53.896376695Z" level=info msg="CreateContainer within sandbox \"c64a12145d9aaf58e0c8fcd9daba7f06be8d10ec52ef96c22748ab4e5d1e2f99\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 01:29:53.939796 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2457239659.mount: Deactivated successfully. 
Dec 13 01:29:53.952600 containerd[1696]: time="2024-12-13T01:29:53.952526201Z" level=info msg="CreateContainer within sandbox \"c64a12145d9aaf58e0c8fcd9daba7f06be8d10ec52ef96c22748ab4e5d1e2f99\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c3cf5ce76f5f317d7bf08ea217ab07b484229501ab1c45ec68190cec66bca082\"" Dec 13 01:29:53.953477 containerd[1696]: time="2024-12-13T01:29:53.953308761Z" level=info msg="StartContainer for \"c3cf5ce76f5f317d7bf08ea217ab07b484229501ab1c45ec68190cec66bca082\"" Dec 13 01:29:53.982865 systemd[1]: Started cri-containerd-c3cf5ce76f5f317d7bf08ea217ab07b484229501ab1c45ec68190cec66bca082.scope - libcontainer container c3cf5ce76f5f317d7bf08ea217ab07b484229501ab1c45ec68190cec66bca082. Dec 13 01:29:54.013987 containerd[1696]: time="2024-12-13T01:29:54.013918666Z" level=info msg="StartContainer for \"c3cf5ce76f5f317d7bf08ea217ab07b484229501ab1c45ec68190cec66bca082\" returns successfully" Dec 13 01:29:54.234914 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 01:29:54.235060 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 13 01:29:54.673403 kubelet[3180]: I1213 01:29:54.673323 3180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-bbjfr" podStartSLOduration=2.071903648 podStartE2EDuration="16.673292665s" podCreationTimestamp="2024-12-13 01:29:38 +0000 UTC" firstStartedPulling="2024-12-13 01:29:39.281957681 +0000 UTC m=+22.906730800" lastFinishedPulling="2024-12-13 01:29:53.883346698 +0000 UTC m=+37.508119817" observedRunningTime="2024-12-13 01:29:54.671785305 +0000 UTC m=+38.296558424" watchObservedRunningTime="2024-12-13 01:29:54.673292665 +0000 UTC m=+38.298065784" Dec 13 01:30:01.463838 containerd[1696]: time="2024-12-13T01:30:01.462562512Z" level=info msg="StopPodSandbox for \"1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94\"" Dec 13 01:30:01.463838 containerd[1696]: time="2024-12-13T01:30:01.462825632Z" level=info msg="StopPodSandbox for \"86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f\"" Dec 13 01:30:01.576957 containerd[1696]: 2024-12-13 01:30:01.529 [INFO][4559] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f" Dec 13 01:30:01.576957 containerd[1696]: 2024-12-13 01:30:01.529 [INFO][4559] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f" iface="eth0" netns="/var/run/netns/cni-736daa32-a49e-36e4-9e3a-e971e29f1301" Dec 13 01:30:01.576957 containerd[1696]: 2024-12-13 01:30:01.531 [INFO][4559] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f" iface="eth0" netns="/var/run/netns/cni-736daa32-a49e-36e4-9e3a-e971e29f1301" Dec 13 01:30:01.576957 containerd[1696]: 2024-12-13 01:30:01.532 [INFO][4559] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f" iface="eth0" netns="/var/run/netns/cni-736daa32-a49e-36e4-9e3a-e971e29f1301" Dec 13 01:30:01.576957 containerd[1696]: 2024-12-13 01:30:01.532 [INFO][4559] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f" Dec 13 01:30:01.576957 containerd[1696]: 2024-12-13 01:30:01.532 [INFO][4559] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f" Dec 13 01:30:01.576957 containerd[1696]: 2024-12-13 01:30:01.562 [INFO][4571] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f" HandleID="k8s-pod-network.86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--9mvzk-eth0" Dec 13 01:30:01.576957 containerd[1696]: 2024-12-13 01:30:01.563 [INFO][4571] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:01.576957 containerd[1696]: 2024-12-13 01:30:01.563 [INFO][4571] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:01.576957 containerd[1696]: 2024-12-13 01:30:01.572 [WARNING][4571] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f" HandleID="k8s-pod-network.86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--9mvzk-eth0" Dec 13 01:30:01.576957 containerd[1696]: 2024-12-13 01:30:01.572 [INFO][4571] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f" HandleID="k8s-pod-network.86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--9mvzk-eth0" Dec 13 01:30:01.576957 containerd[1696]: 2024-12-13 01:30:01.574 [INFO][4571] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:01.576957 containerd[1696]: 2024-12-13 01:30:01.575 [INFO][4559] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f" Dec 13 01:30:01.577874 containerd[1696]: time="2024-12-13T01:30:01.577609033Z" level=info msg="TearDown network for sandbox \"86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f\" successfully" Dec 13 01:30:01.577874 containerd[1696]: time="2024-12-13T01:30:01.577646313Z" level=info msg="StopPodSandbox for \"86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f\" returns successfully" Dec 13 01:30:01.580147 systemd[1]: run-netns-cni\x2d736daa32\x2da49e\x2d36e4\x2d9e3a\x2de971e29f1301.mount: Deactivated successfully. 
Dec 13 01:30:01.583106 containerd[1696]: time="2024-12-13T01:30:01.583036713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9mvzk,Uid:6371d89c-b939-4616-95e1-8dfb9a85f504,Namespace:kube-system,Attempt:1,}" Dec 13 01:30:01.585250 kubelet[3180]: I1213 01:30:01.585142 3180 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:30:01.595646 containerd[1696]: 2024-12-13 01:30:01.537 [INFO][4558] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94" Dec 13 01:30:01.595646 containerd[1696]: 2024-12-13 01:30:01.538 [INFO][4558] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94" iface="eth0" netns="/var/run/netns/cni-ef08c32a-dbfc-aa54-24f6-a9a10c454591" Dec 13 01:30:01.595646 containerd[1696]: 2024-12-13 01:30:01.538 [INFO][4558] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94" iface="eth0" netns="/var/run/netns/cni-ef08c32a-dbfc-aa54-24f6-a9a10c454591" Dec 13 01:30:01.595646 containerd[1696]: 2024-12-13 01:30:01.538 [INFO][4558] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94" iface="eth0" netns="/var/run/netns/cni-ef08c32a-dbfc-aa54-24f6-a9a10c454591" Dec 13 01:30:01.595646 containerd[1696]: 2024-12-13 01:30:01.538 [INFO][4558] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94" Dec 13 01:30:01.595646 containerd[1696]: 2024-12-13 01:30:01.538 [INFO][4558] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94" Dec 13 01:30:01.595646 containerd[1696]: 2024-12-13 01:30:01.571 [INFO][4577] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94" HandleID="k8s-pod-network.1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--mlhmd-eth0" Dec 13 01:30:01.595646 containerd[1696]: 2024-12-13 01:30:01.571 [INFO][4577] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:01.595646 containerd[1696]: 2024-12-13 01:30:01.573 [INFO][4577] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:01.595646 containerd[1696]: 2024-12-13 01:30:01.588 [WARNING][4577] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94" HandleID="k8s-pod-network.1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--mlhmd-eth0" Dec 13 01:30:01.595646 containerd[1696]: 2024-12-13 01:30:01.588 [INFO][4577] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94" HandleID="k8s-pod-network.1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--mlhmd-eth0" Dec 13 01:30:01.595646 containerd[1696]: 2024-12-13 01:30:01.590 [INFO][4577] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:30:01.595646 containerd[1696]: 2024-12-13 01:30:01.592 [INFO][4558] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94" Dec 13 01:30:01.595646 containerd[1696]: time="2024-12-13T01:30:01.593754034Z" level=info msg="TearDown network for sandbox \"1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94\" successfully" Dec 13 01:30:01.595646 containerd[1696]: time="2024-12-13T01:30:01.593777274Z" level=info msg="StopPodSandbox for \"1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94\" returns successfully" Dec 13 01:30:01.596789 systemd[1]: run-netns-cni\x2def08c32a\x2ddbfc\x2daa54\x2d24f6\x2da9a10c454591.mount: Deactivated successfully. Dec 13 01:30:01.597011 containerd[1696]: time="2024-12-13T01:30:01.596966874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd5658d56-mlhmd,Uid:fba0a9d1-bb75-416a-9f26-d9707b860c3d,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:30:01.878318 systemd-networkd[1482]: cali3541c067069: Link UP Dec 13 01:30:01.879270 systemd-networkd[1482]: cali3541c067069: Gained carrier Dec 13 01:30:01.880185 systemd-networkd[1482]: cali239ff68261f: Link UP Dec 13 01:30:01.881286 systemd-networkd[1482]: cali239ff68261f: Gained carrier Dec 13 01:30:01.912031 containerd[1696]: 2024-12-13 01:30:01.725 [INFO][4619] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 01:30:01.912031 containerd[1696]: 2024-12-13 01:30:01.742 [INFO][4619] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--mlhmd-eth0 calico-apiserver-6cd5658d56- calico-apiserver fba0a9d1-bb75-416a-9f26-d9707b860c3d 767 0 2024-12-13 01:29:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cd5658d56 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.2.1-a-c1e94b9ee1 calico-apiserver-6cd5658d56-mlhmd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3541c067069 [] []}} ContainerID="c6d52d8bd0363b1e8c1deac91a602b51e6d993dc0558cb3e9add1a5582cead6d" Namespace="calico-apiserver" Pod="calico-apiserver-6cd5658d56-mlhmd" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--mlhmd-" Dec 13 01:30:01.912031 containerd[1696]: 2024-12-13 01:30:01.742 [INFO][4619] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c6d52d8bd0363b1e8c1deac91a602b51e6d993dc0558cb3e9add1a5582cead6d" Namespace="calico-apiserver" Pod="calico-apiserver-6cd5658d56-mlhmd" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--mlhmd-eth0" Dec 13 01:30:01.912031 containerd[1696]: 2024-12-13 01:30:01.790 [INFO][4655] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c6d52d8bd0363b1e8c1deac91a602b51e6d993dc0558cb3e9add1a5582cead6d" HandleID="k8s-pod-network.c6d52d8bd0363b1e8c1deac91a602b51e6d993dc0558cb3e9add1a5582cead6d" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--mlhmd-eth0" Dec 13 01:30:01.912031 containerd[1696]: 2024-12-13 01:30:01.805 [INFO][4655] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c6d52d8bd0363b1e8c1deac91a602b51e6d993dc0558cb3e9add1a5582cead6d" 
HandleID="k8s-pod-network.c6d52d8bd0363b1e8c1deac91a602b51e6d993dc0558cb3e9add1a5582cead6d" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--mlhmd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002eb6f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.2.1-a-c1e94b9ee1", "pod":"calico-apiserver-6cd5658d56-mlhmd", "timestamp":"2024-12-13 01:30:01.790474956 +0000 UTC"}, Hostname:"ci-4081.2.1-a-c1e94b9ee1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:30:01.912031 containerd[1696]: 2024-12-13 01:30:01.805 [INFO][4655] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:01.912031 containerd[1696]: 2024-12-13 01:30:01.827 [INFO][4655] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:01.912031 containerd[1696]: 2024-12-13 01:30:01.828 [INFO][4655] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-a-c1e94b9ee1' Dec 13 01:30:01.912031 containerd[1696]: 2024-12-13 01:30:01.831 [INFO][4655] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c6d52d8bd0363b1e8c1deac91a602b51e6d993dc0558cb3e9add1a5582cead6d" host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:01.912031 containerd[1696]: 2024-12-13 01:30:01.837 [INFO][4655] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:01.912031 containerd[1696]: 2024-12-13 01:30:01.842 [INFO][4655] ipam/ipam.go 489: Trying affinity for 192.168.92.0/26 host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:01.912031 containerd[1696]: 2024-12-13 01:30:01.844 [INFO][4655] ipam/ipam.go 155: Attempting to load block cidr=192.168.92.0/26 host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:01.912031 containerd[1696]: 2024-12-13 01:30:01.847 [INFO][4655] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.92.0/26 host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:01.912031 containerd[1696]: 2024-12-13 01:30:01.847 [INFO][4655] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.92.0/26 handle="k8s-pod-network.c6d52d8bd0363b1e8c1deac91a602b51e6d993dc0558cb3e9add1a5582cead6d" host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:01.912031 containerd[1696]: 2024-12-13 01:30:01.848 [INFO][4655] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c6d52d8bd0363b1e8c1deac91a602b51e6d993dc0558cb3e9add1a5582cead6d Dec 13 01:30:01.912031 containerd[1696]: 2024-12-13 01:30:01.856 [INFO][4655] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.92.0/26 handle="k8s-pod-network.c6d52d8bd0363b1e8c1deac91a602b51e6d993dc0558cb3e9add1a5582cead6d" host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:01.912031 containerd[1696]: 2024-12-13 01:30:01.863 [INFO][4655] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.92.2/26] block=192.168.92.0/26 handle="k8s-pod-network.c6d52d8bd0363b1e8c1deac91a602b51e6d993dc0558cb3e9add1a5582cead6d" host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:01.912031 containerd[1696]: 2024-12-13 01:30:01.863 [INFO][4655] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.92.2/26] handle="k8s-pod-network.c6d52d8bd0363b1e8c1deac91a602b51e6d993dc0558cb3e9add1a5582cead6d" host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:01.912031 containerd[1696]: 2024-12-13 01:30:01.863 [INFO][4655] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:30:01.912031 containerd[1696]: 2024-12-13 01:30:01.863 [INFO][4655] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.2/26] IPv6=[] ContainerID="c6d52d8bd0363b1e8c1deac91a602b51e6d993dc0558cb3e9add1a5582cead6d" HandleID="k8s-pod-network.c6d52d8bd0363b1e8c1deac91a602b51e6d993dc0558cb3e9add1a5582cead6d" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--mlhmd-eth0" Dec 13 01:30:01.912612 containerd[1696]: 2024-12-13 01:30:01.865 [INFO][4619] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c6d52d8bd0363b1e8c1deac91a602b51e6d993dc0558cb3e9add1a5582cead6d" Namespace="calico-apiserver" Pod="calico-apiserver-6cd5658d56-mlhmd" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--mlhmd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--mlhmd-eth0", GenerateName:"calico-apiserver-6cd5658d56-", Namespace:"calico-apiserver", SelfLink:"", UID:"fba0a9d1-bb75-416a-9f26-d9707b860c3d", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cd5658d56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-c1e94b9ee1", ContainerID:"", Pod:"calico-apiserver-6cd5658d56-mlhmd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3541c067069", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:01.912612 containerd[1696]: 2024-12-13 01:30:01.865 [INFO][4619] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.92.2/32] ContainerID="c6d52d8bd0363b1e8c1deac91a602b51e6d993dc0558cb3e9add1a5582cead6d" Namespace="calico-apiserver" Pod="calico-apiserver-6cd5658d56-mlhmd" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--mlhmd-eth0" Dec 13 01:30:01.912612 containerd[1696]: 2024-12-13 01:30:01.865 [INFO][4619] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3541c067069 ContainerID="c6d52d8bd0363b1e8c1deac91a602b51e6d993dc0558cb3e9add1a5582cead6d" Namespace="calico-apiserver" Pod="calico-apiserver-6cd5658d56-mlhmd" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--mlhmd-eth0" Dec 13 01:30:01.912612 containerd[1696]: 2024-12-13 01:30:01.879 [INFO][4619] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c6d52d8bd0363b1e8c1deac91a602b51e6d993dc0558cb3e9add1a5582cead6d" Namespace="calico-apiserver" Pod="calico-apiserver-6cd5658d56-mlhmd" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--mlhmd-eth0" Dec 13 01:30:01.912612 containerd[1696]: 2024-12-13 01:30:01.881 [INFO][4619] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="c6d52d8bd0363b1e8c1deac91a602b51e6d993dc0558cb3e9add1a5582cead6d" Namespace="calico-apiserver" Pod="calico-apiserver-6cd5658d56-mlhmd" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--mlhmd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--mlhmd-eth0", GenerateName:"calico-apiserver-6cd5658d56-", Namespace:"calico-apiserver", SelfLink:"", UID:"fba0a9d1-bb75-416a-9f26-d9707b860c3d", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cd5658d56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-c1e94b9ee1", ContainerID:"c6d52d8bd0363b1e8c1deac91a602b51e6d993dc0558cb3e9add1a5582cead6d", Pod:"calico-apiserver-6cd5658d56-mlhmd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3541c067069", MAC:"ce:f1:36:6d:cd:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:01.912612 containerd[1696]: 2024-12-13 01:30:01.900 [INFO][4619] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c6d52d8bd0363b1e8c1deac91a602b51e6d993dc0558cb3e9add1a5582cead6d" Namespace="calico-apiserver" Pod="calico-apiserver-6cd5658d56-mlhmd" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--mlhmd-eth0" Dec 13 01:30:01.924337 containerd[1696]: 2024-12-13 01:30:01.680 [INFO][4606] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 01:30:01.924337 containerd[1696]: 2024-12-13 01:30:01.698 [INFO][4606] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--9mvzk-eth0 coredns-7db6d8ff4d- kube-system 6371d89c-b939-4616-95e1-8dfb9a85f504 768 0 2024-12-13 01:29:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.2.1-a-c1e94b9ee1 coredns-7db6d8ff4d-9mvzk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali239ff68261f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="735e98d6dc87ada56bc50b00625e56d6e430025f1974b8739f6de423ee8f16fd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9mvzk" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--9mvzk-" Dec 13 01:30:01.924337 containerd[1696]: 2024-12-13 01:30:01.698 [INFO][4606] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="735e98d6dc87ada56bc50b00625e56d6e430025f1974b8739f6de423ee8f16fd" 
Namespace="kube-system" Pod="coredns-7db6d8ff4d-9mvzk" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--9mvzk-eth0" Dec 13 01:30:01.924337 containerd[1696]: 2024-12-13 01:30:01.753 [INFO][4638] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="735e98d6dc87ada56bc50b00625e56d6e430025f1974b8739f6de423ee8f16fd" HandleID="k8s-pod-network.735e98d6dc87ada56bc50b00625e56d6e430025f1974b8739f6de423ee8f16fd" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--9mvzk-eth0" Dec 13 01:30:01.924337 containerd[1696]: 2024-12-13 01:30:01.777 [INFO][4638] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="735e98d6dc87ada56bc50b00625e56d6e430025f1974b8739f6de423ee8f16fd" HandleID="k8s-pod-network.735e98d6dc87ada56bc50b00625e56d6e430025f1974b8739f6de423ee8f16fd" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--9mvzk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000317160), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.2.1-a-c1e94b9ee1", "pod":"coredns-7db6d8ff4d-9mvzk", "timestamp":"2024-12-13 01:30:01.753497276 +0000 UTC"}, Hostname:"ci-4081.2.1-a-c1e94b9ee1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:30:01.924337 containerd[1696]: 2024-12-13 01:30:01.778 [INFO][4638] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:01.924337 containerd[1696]: 2024-12-13 01:30:01.779 [INFO][4638] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:01.924337 containerd[1696]: 2024-12-13 01:30:01.779 [INFO][4638] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-a-c1e94b9ee1' Dec 13 01:30:01.924337 containerd[1696]: 2024-12-13 01:30:01.784 [INFO][4638] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.735e98d6dc87ada56bc50b00625e56d6e430025f1974b8739f6de423ee8f16fd" host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:01.924337 containerd[1696]: 2024-12-13 01:30:01.792 [INFO][4638] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:01.924337 containerd[1696]: 2024-12-13 01:30:01.803 [INFO][4638] ipam/ipam.go 489: Trying affinity for 192.168.92.0/26 host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:01.924337 containerd[1696]: 2024-12-13 01:30:01.805 [INFO][4638] ipam/ipam.go 155: Attempting to load block cidr=192.168.92.0/26 host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:01.924337 containerd[1696]: 2024-12-13 01:30:01.808 [INFO][4638] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.92.0/26 host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:01.924337 containerd[1696]: 2024-12-13 01:30:01.808 [INFO][4638] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.92.0/26 handle="k8s-pod-network.735e98d6dc87ada56bc50b00625e56d6e430025f1974b8739f6de423ee8f16fd" host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:01.924337 containerd[1696]: 2024-12-13 01:30:01.810 [INFO][4638] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.735e98d6dc87ada56bc50b00625e56d6e430025f1974b8739f6de423ee8f16fd Dec 13 01:30:01.924337 containerd[1696]: 2024-12-13 01:30:01.818 [INFO][4638] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.92.0/26 handle="k8s-pod-network.735e98d6dc87ada56bc50b00625e56d6e430025f1974b8739f6de423ee8f16fd" host="ci-4081.2.1-a-c1e94b9ee1" 
Dec 13 01:30:01.924337 containerd[1696]: 2024-12-13 01:30:01.827 [INFO][4638] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.92.1/26] block=192.168.92.0/26 handle="k8s-pod-network.735e98d6dc87ada56bc50b00625e56d6e430025f1974b8739f6de423ee8f16fd" host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:01.924337 containerd[1696]: 2024-12-13 01:30:01.827 [INFO][4638] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.92.1/26] handle="k8s-pod-network.735e98d6dc87ada56bc50b00625e56d6e430025f1974b8739f6de423ee8f16fd" host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:01.924337 containerd[1696]: 2024-12-13 01:30:01.827 [INFO][4638] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:01.924337 containerd[1696]: 2024-12-13 01:30:01.827 [INFO][4638] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.1/26] IPv6=[] ContainerID="735e98d6dc87ada56bc50b00625e56d6e430025f1974b8739f6de423ee8f16fd" HandleID="k8s-pod-network.735e98d6dc87ada56bc50b00625e56d6e430025f1974b8739f6de423ee8f16fd" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--9mvzk-eth0" Dec 13 01:30:01.924939 containerd[1696]: 2024-12-13 01:30:01.831 [INFO][4606] cni-plugin/k8s.go 386: Populated endpoint ContainerID="735e98d6dc87ada56bc50b00625e56d6e430025f1974b8739f6de423ee8f16fd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9mvzk" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--9mvzk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--9mvzk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6371d89c-b939-4616-95e1-8dfb9a85f504", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-c1e94b9ee1", ContainerID:"", Pod:"coredns-7db6d8ff4d-9mvzk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali239ff68261f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:01.924939 containerd[1696]: 2024-12-13 01:30:01.832 [INFO][4606] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.92.1/32] ContainerID="735e98d6dc87ada56bc50b00625e56d6e430025f1974b8739f6de423ee8f16fd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9mvzk" 
WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--9mvzk-eth0" Dec 13 01:30:01.924939 containerd[1696]: 2024-12-13 01:30:01.832 [INFO][4606] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali239ff68261f ContainerID="735e98d6dc87ada56bc50b00625e56d6e430025f1974b8739f6de423ee8f16fd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9mvzk" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--9mvzk-eth0" Dec 13 01:30:01.924939 containerd[1696]: 2024-12-13 01:30:01.881 [INFO][4606] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="735e98d6dc87ada56bc50b00625e56d6e430025f1974b8739f6de423ee8f16fd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9mvzk" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--9mvzk-eth0" Dec 13 01:30:01.924939 containerd[1696]: 2024-12-13 01:30:01.881 [INFO][4606] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="735e98d6dc87ada56bc50b00625e56d6e430025f1974b8739f6de423ee8f16fd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9mvzk" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--9mvzk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--9mvzk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6371d89c-b939-4616-95e1-8dfb9a85f504", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-c1e94b9ee1", ContainerID:"735e98d6dc87ada56bc50b00625e56d6e430025f1974b8739f6de423ee8f16fd", Pod:"coredns-7db6d8ff4d-9mvzk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali239ff68261f", MAC:"7e:8a:91:a1:3a:c7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:01.924939 containerd[1696]: 2024-12-13 01:30:01.913 [INFO][4606] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="735e98d6dc87ada56bc50b00625e56d6e430025f1974b8739f6de423ee8f16fd" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9mvzk" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--9mvzk-eth0" Dec 13 01:30:01.977186 containerd[1696]: time="2024-12-13T01:30:01.977088719Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:30:01.978732 containerd[1696]: time="2024-12-13T01:30:01.977197439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:30:01.978732 containerd[1696]: time="2024-12-13T01:30:01.977226239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:01.979278 containerd[1696]: time="2024-12-13T01:30:01.979102959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:01.987266 containerd[1696]: time="2024-12-13T01:30:01.986956399Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:30:01.987266 containerd[1696]: time="2024-12-13T01:30:01.987019359Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:30:01.987266 containerd[1696]: time="2024-12-13T01:30:01.987047239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:01.987266 containerd[1696]: time="2024-12-13T01:30:01.987132599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:02.010035 systemd[1]: Started cri-containerd-735e98d6dc87ada56bc50b00625e56d6e430025f1974b8739f6de423ee8f16fd.scope - libcontainer container 735e98d6dc87ada56bc50b00625e56d6e430025f1974b8739f6de423ee8f16fd. Dec 13 01:30:02.023253 systemd[1]: Started cri-containerd-c6d52d8bd0363b1e8c1deac91a602b51e6d993dc0558cb3e9add1a5582cead6d.scope - libcontainer container c6d52d8bd0363b1e8c1deac91a602b51e6d993dc0558cb3e9add1a5582cead6d. 
Dec 13 01:30:02.072141 containerd[1696]: time="2024-12-13T01:30:02.071936200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9mvzk,Uid:6371d89c-b939-4616-95e1-8dfb9a85f504,Namespace:kube-system,Attempt:1,} returns sandbox id \"735e98d6dc87ada56bc50b00625e56d6e430025f1974b8739f6de423ee8f16fd\"" Dec 13 01:30:02.097902 containerd[1696]: time="2024-12-13T01:30:02.097842880Z" level=info msg="CreateContainer within sandbox \"735e98d6dc87ada56bc50b00625e56d6e430025f1974b8739f6de423ee8f16fd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:30:02.118609 containerd[1696]: time="2024-12-13T01:30:02.118531560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd5658d56-mlhmd,Uid:fba0a9d1-bb75-416a-9f26-d9707b860c3d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c6d52d8bd0363b1e8c1deac91a602b51e6d993dc0558cb3e9add1a5582cead6d\"" Dec 13 01:30:02.120600 containerd[1696]: time="2024-12-13T01:30:02.120573320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:30:02.200209 containerd[1696]: time="2024-12-13T01:30:02.200071282Z" level=info msg="CreateContainer within sandbox \"735e98d6dc87ada56bc50b00625e56d6e430025f1974b8739f6de423ee8f16fd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a944c3f59ac8e1a829b3f49b569e0395f7f058d0a4cc60f81d859dc9900afd43\"" Dec 13 01:30:02.202783 containerd[1696]: time="2024-12-13T01:30:02.201927722Z" level=info msg="StartContainer for \"a944c3f59ac8e1a829b3f49b569e0395f7f058d0a4cc60f81d859dc9900afd43\"" Dec 13 01:30:02.224835 systemd[1]: Started cri-containerd-a944c3f59ac8e1a829b3f49b569e0395f7f058d0a4cc60f81d859dc9900afd43.scope - libcontainer container a944c3f59ac8e1a829b3f49b569e0395f7f058d0a4cc60f81d859dc9900afd43. Dec 13 01:30:02.259830 containerd[1696]: time="2024-12-13T01:30:02.259470082Z" level=info msg="StartContainer for \"a944c3f59ac8e1a829b3f49b569e0395f7f058d0a4cc60f81d859dc9900afd43\" returns successfully" Dec 13 01:30:02.476856 containerd[1696]: time="2024-12-13T01:30:02.476336725Z" level=info msg="StopPodSandbox for \"657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4\"" Dec 13 01:30:02.588219 containerd[1696]: 2024-12-13 01:30:02.545 [INFO][4845] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4" Dec 13 01:30:02.588219 containerd[1696]: 2024-12-13 01:30:02.547 [INFO][4845] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4" iface="eth0" netns="/var/run/netns/cni-0026a654-8b0f-b2f4-2575-774684330b83" Dec 13 01:30:02.588219 containerd[1696]: 2024-12-13 01:30:02.547 [INFO][4845] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4" iface="eth0" netns="/var/run/netns/cni-0026a654-8b0f-b2f4-2575-774684330b83" Dec 13 01:30:02.588219 containerd[1696]: 2024-12-13 01:30:02.548 [INFO][4845] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4" iface="eth0" netns="/var/run/netns/cni-0026a654-8b0f-b2f4-2575-774684330b83" Dec 13 01:30:02.588219 containerd[1696]: 2024-12-13 01:30:02.548 [INFO][4845] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4" Dec 13 01:30:02.588219 containerd[1696]: 2024-12-13 01:30:02.548 [INFO][4845] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4" Dec 13 01:30:02.588219 containerd[1696]: 2024-12-13 01:30:02.568 [INFO][4856] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4" HandleID="k8s-pod-network.657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--kube--controllers--75778b7f7b--qcllh-eth0" Dec 13 01:30:02.588219 containerd[1696]: 2024-12-13 01:30:02.569 [INFO][4856] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:02.588219 containerd[1696]: 2024-12-13 01:30:02.569 [INFO][4856] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:02.588219 containerd[1696]: 2024-12-13 01:30:02.578 [WARNING][4856] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4" HandleID="k8s-pod-network.657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--kube--controllers--75778b7f7b--qcllh-eth0" Dec 13 01:30:02.588219 containerd[1696]: 2024-12-13 01:30:02.578 [INFO][4856] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4" HandleID="k8s-pod-network.657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--kube--controllers--75778b7f7b--qcllh-eth0" Dec 13 01:30:02.588219 containerd[1696]: 2024-12-13 01:30:02.583 [INFO][4856] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:02.588219 containerd[1696]: 2024-12-13 01:30:02.586 [INFO][4845] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4" Dec 13 01:30:02.590601 containerd[1696]: time="2024-12-13T01:30:02.588920487Z" level=info msg="TearDown network for sandbox \"657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4\" successfully" Dec 13 01:30:02.590601 containerd[1696]: time="2024-12-13T01:30:02.588952207Z" level=info msg="StopPodSandbox for \"657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4\" returns successfully" Dec 13 01:30:02.592189 containerd[1696]: time="2024-12-13T01:30:02.590839087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75778b7f7b-qcllh,Uid:a04b6ce3-5f74-4242-b4cb-d590307c3dbb,Namespace:calico-system,Attempt:1,}" Dec 13 01:30:02.591834 systemd[1]: run-netns-cni\x2d0026a654\x2d8b0f\x2db2f4\x2d2575\x2d774684330b83.mount: Deactivated successfully. 
Dec 13 01:30:02.720040 kubelet[3180]: I1213 01:30:02.719951 3180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-9mvzk" podStartSLOduration=31.719930088 podStartE2EDuration="31.719930088s" podCreationTimestamp="2024-12-13 01:29:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:30:02.719145568 +0000 UTC m=+46.343918687" watchObservedRunningTime="2024-12-13 01:30:02.719930088 +0000 UTC m=+46.344703207" Dec 13 01:30:02.773042 systemd-networkd[1482]: calid4c1c8b2ecf: Link UP Dec 13 01:30:02.775711 systemd-networkd[1482]: calid4c1c8b2ecf: Gained carrier Dec 13 01:30:02.795315 containerd[1696]: 2024-12-13 01:30:02.651 [INFO][4862] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 01:30:02.795315 containerd[1696]: 2024-12-13 01:30:02.664 [INFO][4862] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--a--c1e94b9ee1-k8s-calico--kube--controllers--75778b7f7b--qcllh-eth0 calico-kube-controllers-75778b7f7b- calico-system a04b6ce3-5f74-4242-b4cb-d590307c3dbb 784 0 2024-12-13 01:29:39 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:75778b7f7b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.2.1-a-c1e94b9ee1 calico-kube-controllers-75778b7f7b-qcllh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid4c1c8b2ecf [] []}} ContainerID="13507c54a057b0c38a58d30ee050aee20d14c934994f92ab39d28c5b4bfa1dff" Namespace="calico-system" Pod="calico-kube-controllers-75778b7f7b-qcllh" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--kube--controllers--75778b7f7b--qcllh-" Dec 13 01:30:02.795315 containerd[1696]: 2024-12-13 01:30:02.664 [INFO][4862] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="13507c54a057b0c38a58d30ee050aee20d14c934994f92ab39d28c5b4bfa1dff" Namespace="calico-system" Pod="calico-kube-controllers-75778b7f7b-qcllh" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--kube--controllers--75778b7f7b--qcllh-eth0" Dec 13 01:30:02.795315 containerd[1696]: 2024-12-13 01:30:02.691 [INFO][4874] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="13507c54a057b0c38a58d30ee050aee20d14c934994f92ab39d28c5b4bfa1dff" HandleID="k8s-pod-network.13507c54a057b0c38a58d30ee050aee20d14c934994f92ab39d28c5b4bfa1dff" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--kube--controllers--75778b7f7b--qcllh-eth0" Dec 13 01:30:02.795315 containerd[1696]: 2024-12-13 01:30:02.709 [INFO][4874] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="13507c54a057b0c38a58d30ee050aee20d14c934994f92ab39d28c5b4bfa1dff" HandleID="k8s-pod-network.13507c54a057b0c38a58d30ee050aee20d14c934994f92ab39d28c5b4bfa1dff" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--kube--controllers--75778b7f7b--qcllh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028d4b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.2.1-a-c1e94b9ee1", "pod":"calico-kube-controllers-75778b7f7b-qcllh", "timestamp":"2024-12-13 01:30:02.691569848 +0000 UTC"}, Hostname:"ci-4081.2.1-a-c1e94b9ee1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:30:02.795315 containerd[1696]: 2024-12-13 01:30:02.709 [INFO][4874] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:02.795315 containerd[1696]: 2024-12-13 01:30:02.709 [INFO][4874] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:02.795315 containerd[1696]: 2024-12-13 01:30:02.709 [INFO][4874] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-a-c1e94b9ee1' Dec 13 01:30:02.795315 containerd[1696]: 2024-12-13 01:30:02.711 [INFO][4874] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.13507c54a057b0c38a58d30ee050aee20d14c934994f92ab39d28c5b4bfa1dff" host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:02.795315 containerd[1696]: 2024-12-13 01:30:02.718 [INFO][4874] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:02.795315 containerd[1696]: 2024-12-13 01:30:02.730 [INFO][4874] ipam/ipam.go 489: Trying affinity for 192.168.92.0/26 host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:02.795315 containerd[1696]: 2024-12-13 01:30:02.737 [INFO][4874] ipam/ipam.go 155: Attempting to load block cidr=192.168.92.0/26 host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:02.795315 containerd[1696]: 2024-12-13 01:30:02.743 [INFO][4874] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.92.0/26 host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:02.795315 containerd[1696]: 2024-12-13 01:30:02.743 [INFO][4874] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.92.0/26 handle="k8s-pod-network.13507c54a057b0c38a58d30ee050aee20d14c934994f92ab39d28c5b4bfa1dff" host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:02.795315 containerd[1696]: 2024-12-13 01:30:02.748 [INFO][4874] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.13507c54a057b0c38a58d30ee050aee20d14c934994f92ab39d28c5b4bfa1dff Dec 13 01:30:02.795315 containerd[1696]: 2024-12-13 01:30:02.756 [INFO][4874] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.92.0/26 handle="k8s-pod-network.13507c54a057b0c38a58d30ee050aee20d14c934994f92ab39d28c5b4bfa1dff" host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:02.795315 containerd[1696]: 2024-12-13 01:30:02.767 [INFO][4874] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.92.3/26] block=192.168.92.0/26 handle="k8s-pod-network.13507c54a057b0c38a58d30ee050aee20d14c934994f92ab39d28c5b4bfa1dff" host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:02.795315 containerd[1696]: 2024-12-13 01:30:02.768 [INFO][4874] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.92.3/26] handle="k8s-pod-network.13507c54a057b0c38a58d30ee050aee20d14c934994f92ab39d28c5b4bfa1dff" host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:02.795315 containerd[1696]: 2024-12-13 01:30:02.768 [INFO][4874] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:30:02.795315 containerd[1696]: 2024-12-13 01:30:02.768 [INFO][4874] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.3/26] IPv6=[] ContainerID="13507c54a057b0c38a58d30ee050aee20d14c934994f92ab39d28c5b4bfa1dff" HandleID="k8s-pod-network.13507c54a057b0c38a58d30ee050aee20d14c934994f92ab39d28c5b4bfa1dff" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--kube--controllers--75778b7f7b--qcllh-eth0" Dec 13 01:30:02.795943 containerd[1696]: 2024-12-13 01:30:02.769 [INFO][4862] cni-plugin/k8s.go 386: Populated endpoint ContainerID="13507c54a057b0c38a58d30ee050aee20d14c934994f92ab39d28c5b4bfa1dff" Namespace="calico-system" Pod="calico-kube-controllers-75778b7f7b-qcllh" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--kube--controllers--75778b7f7b--qcllh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--c1e94b9ee1-k8s-calico--kube--controllers--75778b7f7b--qcllh-eth0", GenerateName:"calico-kube-controllers-75778b7f7b-", Namespace:"calico-system", SelfLink:"", UID:"a04b6ce3-5f74-4242-b4cb-d590307c3dbb", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75778b7f7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-c1e94b9ee1", ContainerID:"", Pod:"calico-kube-controllers-75778b7f7b-qcllh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.92.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid4c1c8b2ecf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:02.795943 containerd[1696]: 2024-12-13 01:30:02.769 [INFO][4862] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.92.3/32] ContainerID="13507c54a057b0c38a58d30ee050aee20d14c934994f92ab39d28c5b4bfa1dff" Namespace="calico-system" Pod="calico-kube-controllers-75778b7f7b-qcllh" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--kube--controllers--75778b7f7b--qcllh-eth0" Dec 13 01:30:02.795943 containerd[1696]: 2024-12-13 01:30:02.769 [INFO][4862] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid4c1c8b2ecf ContainerID="13507c54a057b0c38a58d30ee050aee20d14c934994f92ab39d28c5b4bfa1dff" Namespace="calico-system" Pod="calico-kube-controllers-75778b7f7b-qcllh" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--kube--controllers--75778b7f7b--qcllh-eth0" Dec 13 01:30:02.795943 containerd[1696]: 2024-12-13 01:30:02.775 [INFO][4862] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="13507c54a057b0c38a58d30ee050aee20d14c934994f92ab39d28c5b4bfa1dff" Namespace="calico-system" Pod="calico-kube-controllers-75778b7f7b-qcllh" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--kube--controllers--75778b7f7b--qcllh-eth0" Dec 13 01:30:02.795943 
containerd[1696]: 2024-12-13 01:30:02.777 [INFO][4862] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="13507c54a057b0c38a58d30ee050aee20d14c934994f92ab39d28c5b4bfa1dff" Namespace="calico-system" Pod="calico-kube-controllers-75778b7f7b-qcllh" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--kube--controllers--75778b7f7b--qcllh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--c1e94b9ee1-k8s-calico--kube--controllers--75778b7f7b--qcllh-eth0", GenerateName:"calico-kube-controllers-75778b7f7b-", Namespace:"calico-system", SelfLink:"", UID:"a04b6ce3-5f74-4242-b4cb-d590307c3dbb", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75778b7f7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-c1e94b9ee1", ContainerID:"13507c54a057b0c38a58d30ee050aee20d14c934994f92ab39d28c5b4bfa1dff", Pod:"calico-kube-controllers-75778b7f7b-qcllh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.92.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid4c1c8b2ecf", MAC:"02:31:c6:69:18:c4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:02.795943 containerd[1696]: 2024-12-13 01:30:02.793 [INFO][4862] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="13507c54a057b0c38a58d30ee050aee20d14c934994f92ab39d28c5b4bfa1dff" Namespace="calico-system" Pod="calico-kube-controllers-75778b7f7b-qcllh" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--kube--controllers--75778b7f7b--qcllh-eth0" Dec 13 01:30:02.819014 containerd[1696]: time="2024-12-13T01:30:02.818903850Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:30:02.819014 containerd[1696]: time="2024-12-13T01:30:02.818975970Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:30:02.819014 containerd[1696]: time="2024-12-13T01:30:02.818990810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:02.819349 containerd[1696]: time="2024-12-13T01:30:02.819079690Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:02.839873 systemd[1]: Started cri-containerd-13507c54a057b0c38a58d30ee050aee20d14c934994f92ab39d28c5b4bfa1dff.scope - libcontainer container 13507c54a057b0c38a58d30ee050aee20d14c934994f92ab39d28c5b4bfa1dff. 
Dec 13 01:30:02.876362 containerd[1696]: time="2024-12-13T01:30:02.876311970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-75778b7f7b-qcllh,Uid:a04b6ce3-5f74-4242-b4cb-d590307c3dbb,Namespace:calico-system,Attempt:1,} returns sandbox id \"13507c54a057b0c38a58d30ee050aee20d14c934994f92ab39d28c5b4bfa1dff\"" Dec 13 01:30:03.269872 systemd-networkd[1482]: cali3541c067069: Gained IPv6LL Dec 13 01:30:03.333958 systemd-networkd[1482]: cali239ff68261f: Gained IPv6LL Dec 13 01:30:04.230796 systemd-networkd[1482]: calid4c1c8b2ecf: Gained IPv6LL Dec 13 01:30:04.462522 containerd[1696]: time="2024-12-13T01:30:04.462224551Z" level=info msg="StopPodSandbox for \"ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98\"" Dec 13 01:30:04.464790 containerd[1696]: time="2024-12-13T01:30:04.464545991Z" level=info msg="StopPodSandbox for \"cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837\"" Dec 13 01:30:04.465625 containerd[1696]: time="2024-12-13T01:30:04.465499871Z" level=info msg="StopPodSandbox for \"c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf\"" Dec 13 01:30:04.679798 containerd[1696]: 2024-12-13 01:30:04.598 [INFO][5005] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf" Dec 13 01:30:04.679798 containerd[1696]: 2024-12-13 01:30:04.598 [INFO][5005] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf" iface="eth0" netns="/var/run/netns/cni-686a2ef8-4e5c-2eeb-50fe-426c030f5ed3" Dec 13 01:30:04.679798 containerd[1696]: 2024-12-13 01:30:04.600 [INFO][5005] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf" iface="eth0" netns="/var/run/netns/cni-686a2ef8-4e5c-2eeb-50fe-426c030f5ed3" Dec 13 01:30:04.679798 containerd[1696]: 2024-12-13 01:30:04.602 [INFO][5005] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf" iface="eth0" netns="/var/run/netns/cni-686a2ef8-4e5c-2eeb-50fe-426c030f5ed3" Dec 13 01:30:04.679798 containerd[1696]: 2024-12-13 01:30:04.602 [INFO][5005] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf" Dec 13 01:30:04.679798 containerd[1696]: 2024-12-13 01:30:04.602 [INFO][5005] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf" Dec 13 01:30:04.679798 containerd[1696]: 2024-12-13 01:30:04.651 [INFO][5040] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf" HandleID="k8s-pod-network.c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--plblc-eth0" Dec 13 01:30:04.679798 containerd[1696]: 2024-12-13 01:30:04.651 [INFO][5040] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:04.679798 containerd[1696]: 2024-12-13 01:30:04.651 [INFO][5040] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:04.679798 containerd[1696]: 2024-12-13 01:30:04.667 [WARNING][5040] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf" HandleID="k8s-pod-network.c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--plblc-eth0" Dec 13 01:30:04.679798 containerd[1696]: 2024-12-13 01:30:04.667 [INFO][5040] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf" HandleID="k8s-pod-network.c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--plblc-eth0" Dec 13 01:30:04.679798 containerd[1696]: 2024-12-13 01:30:04.671 [INFO][5040] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:04.679798 containerd[1696]: 2024-12-13 01:30:04.674 [INFO][5005] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf" Dec 13 01:30:04.683542 systemd[1]: run-netns-cni\x2d686a2ef8\x2d4e5c\x2d2eeb\x2d50fe\x2d426c030f5ed3.mount: Deactivated successfully. Dec 13 01:30:04.687761 containerd[1696]: time="2024-12-13T01:30:04.684602514Z" level=info msg="TearDown network for sandbox \"c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf\" successfully" Dec 13 01:30:04.687761 containerd[1696]: time="2024-12-13T01:30:04.684644554Z" level=info msg="StopPodSandbox for \"c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf\" returns successfully" Dec 13 01:30:04.687761 containerd[1696]: time="2024-12-13T01:30:04.686486954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd5658d56-plblc,Uid:7833e1a8-e541-4910-80f7-3565f077e0a5,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:30:04.704594 containerd[1696]: 2024-12-13 01:30:04.569 [INFO][5001] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837" Dec 13 01:30:04.704594 containerd[1696]: 2024-12-13 01:30:04.570 [INFO][5001] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837" iface="eth0" netns="/var/run/netns/cni-ec84aad5-1c50-5b2e-53ae-cc80f3a61a73" Dec 13 01:30:04.704594 containerd[1696]: 2024-12-13 01:30:04.570 [INFO][5001] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837" iface="eth0" netns="/var/run/netns/cni-ec84aad5-1c50-5b2e-53ae-cc80f3a61a73" Dec 13 01:30:04.704594 containerd[1696]: 2024-12-13 01:30:04.572 [INFO][5001] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837" iface="eth0" netns="/var/run/netns/cni-ec84aad5-1c50-5b2e-53ae-cc80f3a61a73" Dec 13 01:30:04.704594 containerd[1696]: 2024-12-13 01:30:04.572 [INFO][5001] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837" Dec 13 01:30:04.704594 containerd[1696]: 2024-12-13 01:30:04.572 [INFO][5001] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837" Dec 13 01:30:04.704594 containerd[1696]: 2024-12-13 01:30:04.659 [INFO][5029] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837" HandleID="k8s-pod-network.cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-csi--node--driver--z7bfm-eth0" Dec 13 01:30:04.704594 containerd[1696]: 2024-12-13 01:30:04.660 [INFO][5029] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:04.704594 containerd[1696]: 2024-12-13 01:30:04.670 [INFO][5029] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:04.704594 containerd[1696]: 2024-12-13 01:30:04.692 [WARNING][5029] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837" HandleID="k8s-pod-network.cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-csi--node--driver--z7bfm-eth0" Dec 13 01:30:04.704594 containerd[1696]: 2024-12-13 01:30:04.692 [INFO][5029] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837" HandleID="k8s-pod-network.cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-csi--node--driver--z7bfm-eth0" Dec 13 01:30:04.704594 containerd[1696]: 2024-12-13 01:30:04.695 [INFO][5029] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:04.704594 containerd[1696]: 2024-12-13 01:30:04.700 [INFO][5001] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837" Dec 13 01:30:04.709039 containerd[1696]: time="2024-12-13T01:30:04.708912714Z" level=info msg="TearDown network for sandbox \"cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837\" successfully" Dec 13 01:30:04.709039 containerd[1696]: time="2024-12-13T01:30:04.708947274Z" level=info msg="StopPodSandbox for \"cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837\" returns successfully" Dec 13 01:30:04.709591 systemd[1]: run-netns-cni\x2dec84aad5\x2d1c50\x2d5b2e\x2d53ae\x2dcc80f3a61a73.mount: Deactivated successfully. Dec 13 01:30:04.710228 containerd[1696]: time="2024-12-13T01:30:04.710189194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z7bfm,Uid:a57b7da1-6e0a-4e25-9020-f599a3a71d0b,Namespace:calico-system,Attempt:1,}" Dec 13 01:30:04.742110 containerd[1696]: 2024-12-13 01:30:04.590 [INFO][5006] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98" Dec 13 01:30:04.742110 containerd[1696]: 2024-12-13 01:30:04.592 [INFO][5006] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98" iface="eth0" netns="/var/run/netns/cni-af9ca92e-5eda-4123-3c76-831fedfe2238" Dec 13 01:30:04.742110 containerd[1696]: 2024-12-13 01:30:04.592 [INFO][5006] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98" iface="eth0" netns="/var/run/netns/cni-af9ca92e-5eda-4123-3c76-831fedfe2238" Dec 13 01:30:04.742110 containerd[1696]: 2024-12-13 01:30:04.593 [INFO][5006] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98" iface="eth0" netns="/var/run/netns/cni-af9ca92e-5eda-4123-3c76-831fedfe2238" Dec 13 01:30:04.742110 containerd[1696]: 2024-12-13 01:30:04.593 [INFO][5006] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98" Dec 13 01:30:04.742110 containerd[1696]: 2024-12-13 01:30:04.593 [INFO][5006] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98" Dec 13 01:30:04.742110 containerd[1696]: 2024-12-13 01:30:04.697 [INFO][5035] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98" HandleID="k8s-pod-network.ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--qlsjk-eth0" Dec 13 01:30:04.742110 containerd[1696]: 2024-12-13 01:30:04.699 [INFO][5035] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:04.742110 containerd[1696]: 2024-12-13 01:30:04.700 [INFO][5035] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:04.742110 containerd[1696]: 2024-12-13 01:30:04.723 [WARNING][5035] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98" HandleID="k8s-pod-network.ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--qlsjk-eth0" Dec 13 01:30:04.742110 containerd[1696]: 2024-12-13 01:30:04.724 [INFO][5035] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98" HandleID="k8s-pod-network.ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--qlsjk-eth0" Dec 13 01:30:04.742110 containerd[1696]: 2024-12-13 01:30:04.727 [INFO][5035] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:04.742110 containerd[1696]: 2024-12-13 01:30:04.729 [INFO][5006] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98" Dec 13 01:30:04.746063 containerd[1696]: time="2024-12-13T01:30:04.743868875Z" level=info msg="TearDown network for sandbox \"ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98\" successfully" Dec 13 01:30:04.746063 containerd[1696]: time="2024-12-13T01:30:04.743912475Z" level=info msg="StopPodSandbox for \"ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98\" returns successfully" Dec 13 01:30:04.744500 systemd[1]: run-netns-cni\x2daf9ca92e\x2d5eda\x2d4123\x2d3c76\x2d831fedfe2238.mount: Deactivated successfully. 
Dec 13 01:30:04.751269 containerd[1696]: time="2024-12-13T01:30:04.751226915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qlsjk,Uid:f2830d02-b780-4c40-8169-2cb412cf67f7,Namespace:kube-system,Attempt:1,}" Dec 13 01:30:05.007790 systemd-networkd[1482]: cali2cc7db0247e: Link UP Dec 13 01:30:05.009202 systemd-networkd[1482]: cali2cc7db0247e: Gained carrier Dec 13 01:30:05.047740 containerd[1696]: 2024-12-13 01:30:04.826 [INFO][5064] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 01:30:05.047740 containerd[1696]: 2024-12-13 01:30:04.849 [INFO][5064] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--plblc-eth0 calico-apiserver-6cd5658d56- calico-apiserver 7833e1a8-e541-4910-80f7-3565f077e0a5 805 0 2024-12-13 01:29:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6cd5658d56 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.2.1-a-c1e94b9ee1 calico-apiserver-6cd5658d56-plblc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2cc7db0247e [] []}} ContainerID="d887a7a2c8b549215834de743df20e56b5f0737f1c35f24bb2bea61f61d2f429" Namespace="calico-apiserver" Pod="calico-apiserver-6cd5658d56-plblc" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--plblc-" Dec 13 01:30:05.047740 containerd[1696]: 2024-12-13 01:30:04.849 [INFO][5064] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d887a7a2c8b549215834de743df20e56b5f0737f1c35f24bb2bea61f61d2f429" Namespace="calico-apiserver" Pod="calico-apiserver-6cd5658d56-plblc" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--plblc-eth0" Dec 13 01:30:05.047740 containerd[1696]: 2024-12-13 01:30:04.916 [INFO][5098] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d887a7a2c8b549215834de743df20e56b5f0737f1c35f24bb2bea61f61d2f429" HandleID="k8s-pod-network.d887a7a2c8b549215834de743df20e56b5f0737f1c35f24bb2bea61f61d2f429" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--plblc-eth0" Dec 13 01:30:05.047740 containerd[1696]: 2024-12-13 01:30:04.935 [INFO][5098] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d887a7a2c8b549215834de743df20e56b5f0737f1c35f24bb2bea61f61d2f429" HandleID="k8s-pod-network.d887a7a2c8b549215834de743df20e56b5f0737f1c35f24bb2bea61f61d2f429" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--plblc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003178e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.2.1-a-c1e94b9ee1", "pod":"calico-apiserver-6cd5658d56-plblc", "timestamp":"2024-12-13 01:30:04.916365837 +0000 UTC"}, Hostname:"ci-4081.2.1-a-c1e94b9ee1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:30:05.047740 containerd[1696]: 2024-12-13 01:30:04.937 [INFO][5098] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:05.047740 containerd[1696]: 2024-12-13 01:30:04.938 [INFO][5098] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:30:05.047740 containerd[1696]: 2024-12-13 01:30:04.939 [INFO][5098] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-a-c1e94b9ee1' Dec 13 01:30:05.047740 containerd[1696]: 2024-12-13 01:30:04.942 [INFO][5098] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d887a7a2c8b549215834de743df20e56b5f0737f1c35f24bb2bea61f61d2f429" host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:05.047740 containerd[1696]: 2024-12-13 01:30:04.953 [INFO][5098] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:05.047740 containerd[1696]: 2024-12-13 01:30:04.962 [INFO][5098] ipam/ipam.go 489: Trying affinity for 192.168.92.0/26 host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:05.047740 containerd[1696]: 2024-12-13 01:30:04.965 [INFO][5098] ipam/ipam.go 155: Attempting to load block cidr=192.168.92.0/26 host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:05.047740 containerd[1696]: 2024-12-13 01:30:04.972 [INFO][5098] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.92.0/26 host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:05.047740 containerd[1696]: 2024-12-13 01:30:04.972 [INFO][5098] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.92.0/26 handle="k8s-pod-network.d887a7a2c8b549215834de743df20e56b5f0737f1c35f24bb2bea61f61d2f429" host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:05.047740 containerd[1696]: 2024-12-13 01:30:04.975 [INFO][5098] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d887a7a2c8b549215834de743df20e56b5f0737f1c35f24bb2bea61f61d2f429 Dec 13 01:30:05.047740 containerd[1696]: 2024-12-13 01:30:04.983 [INFO][5098] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.92.0/26 handle="k8s-pod-network.d887a7a2c8b549215834de743df20e56b5f0737f1c35f24bb2bea61f61d2f429" host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:05.047740 containerd[1696]: 2024-12-13 01:30:04.997 [INFO][5098] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.92.4/26] block=192.168.92.0/26 handle="k8s-pod-network.d887a7a2c8b549215834de743df20e56b5f0737f1c35f24bb2bea61f61d2f429" host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:05.047740 containerd[1696]: 2024-12-13 01:30:04.997 [INFO][5098] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.92.4/26] handle="k8s-pod-network.d887a7a2c8b549215834de743df20e56b5f0737f1c35f24bb2bea61f61d2f429" host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:05.047740 containerd[1696]: 2024-12-13 01:30:04.997 [INFO][5098] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:30:05.047740 containerd[1696]: 2024-12-13 01:30:04.997 [INFO][5098] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.4/26] IPv6=[] ContainerID="d887a7a2c8b549215834de743df20e56b5f0737f1c35f24bb2bea61f61d2f429" HandleID="k8s-pod-network.d887a7a2c8b549215834de743df20e56b5f0737f1c35f24bb2bea61f61d2f429" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--plblc-eth0" Dec 13 01:30:05.048826 containerd[1696]: 2024-12-13 01:30:05.000 [INFO][5064] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d887a7a2c8b549215834de743df20e56b5f0737f1c35f24bb2bea61f61d2f429" Namespace="calico-apiserver" Pod="calico-apiserver-6cd5658d56-plblc" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--plblc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--plblc-eth0", GenerateName:"calico-apiserver-6cd5658d56-", Namespace:"calico-apiserver", SelfLink:"", UID:"7833e1a8-e541-4910-80f7-3565f077e0a5", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cd5658d56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-c1e94b9ee1", ContainerID:"", Pod:"calico-apiserver-6cd5658d56-plblc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2cc7db0247e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:05.048826 containerd[1696]: 2024-12-13 01:30:05.001 [INFO][5064] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.92.4/32] ContainerID="d887a7a2c8b549215834de743df20e56b5f0737f1c35f24bb2bea61f61d2f429" Namespace="calico-apiserver" Pod="calico-apiserver-6cd5658d56-plblc" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--plblc-eth0" Dec 13 01:30:05.048826 containerd[1696]: 2024-12-13 01:30:05.001 [INFO][5064] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2cc7db0247e ContainerID="d887a7a2c8b549215834de743df20e56b5f0737f1c35f24bb2bea61f61d2f429" Namespace="calico-apiserver" Pod="calico-apiserver-6cd5658d56-plblc" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--plblc-eth0" Dec 13 01:30:05.048826 containerd[1696]: 2024-12-13 01:30:05.010 [INFO][5064] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d887a7a2c8b549215834de743df20e56b5f0737f1c35f24bb2bea61f61d2f429" Namespace="calico-apiserver" Pod="calico-apiserver-6cd5658d56-plblc" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--plblc-eth0" Dec 13 01:30:05.048826 containerd[1696]: 2024-12-13 01:30:05.019 [INFO][5064] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="d887a7a2c8b549215834de743df20e56b5f0737f1c35f24bb2bea61f61d2f429" Namespace="calico-apiserver" Pod="calico-apiserver-6cd5658d56-plblc" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--plblc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--plblc-eth0", GenerateName:"calico-apiserver-6cd5658d56-", Namespace:"calico-apiserver", SelfLink:"", UID:"7833e1a8-e541-4910-80f7-3565f077e0a5", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cd5658d56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-c1e94b9ee1", ContainerID:"d887a7a2c8b549215834de743df20e56b5f0737f1c35f24bb2bea61f61d2f429", Pod:"calico-apiserver-6cd5658d56-plblc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2cc7db0247e", MAC:"56:1d:32:d9:f9:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:05.048826 containerd[1696]: 2024-12-13 01:30:05.043 [INFO][5064] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d887a7a2c8b549215834de743df20e56b5f0737f1c35f24bb2bea61f61d2f429" Namespace="calico-apiserver" Pod="calico-apiserver-6cd5658d56-plblc" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--plblc-eth0" Dec 13 01:30:05.081011 containerd[1696]: time="2024-12-13T01:30:05.079499279Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:30:05.081011 containerd[1696]: time="2024-12-13T01:30:05.079640559Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:30:05.081011 containerd[1696]: time="2024-12-13T01:30:05.079707359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:05.081011 containerd[1696]: time="2024-12-13T01:30:05.079836159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:05.109899 systemd[1]: Started cri-containerd-d887a7a2c8b549215834de743df20e56b5f0737f1c35f24bb2bea61f61d2f429.scope - libcontainer container d887a7a2c8b549215834de743df20e56b5f0737f1c35f24bb2bea61f61d2f429. 
Dec 13 01:30:05.119420 systemd-networkd[1482]: cali09ad80e31aa: Link UP Dec 13 01:30:05.120763 systemd-networkd[1482]: cali09ad80e31aa: Gained carrier Dec 13 01:30:05.154932 containerd[1696]: 2024-12-13 01:30:04.913 [INFO][5076] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 01:30:05.154932 containerd[1696]: 2024-12-13 01:30:04.941 [INFO][5076] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--qlsjk-eth0 coredns-7db6d8ff4d- kube-system f2830d02-b780-4c40-8169-2cb412cf67f7 804 0 2024-12-13 01:29:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.2.1-a-c1e94b9ee1 coredns-7db6d8ff4d-qlsjk eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali09ad80e31aa [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f6775921b6654910333d4a2016392326ae25e554b0a849e15b8942112115ae1f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qlsjk" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--qlsjk-" Dec 13 01:30:05.154932 containerd[1696]: 2024-12-13 01:30:04.941 [INFO][5076] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f6775921b6654910333d4a2016392326ae25e554b0a849e15b8942112115ae1f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qlsjk" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--qlsjk-eth0" Dec 13 01:30:05.154932 containerd[1696]: 2024-12-13 01:30:05.001 [INFO][5110] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f6775921b6654910333d4a2016392326ae25e554b0a849e15b8942112115ae1f" HandleID="k8s-pod-network.f6775921b6654910333d4a2016392326ae25e554b0a849e15b8942112115ae1f" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--qlsjk-eth0" Dec 13 01:30:05.154932 containerd[1696]: 2024-12-13 01:30:05.039 [INFO][5110] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f6775921b6654910333d4a2016392326ae25e554b0a849e15b8942112115ae1f" HandleID="k8s-pod-network.f6775921b6654910333d4a2016392326ae25e554b0a849e15b8942112115ae1f" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--qlsjk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000316960), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.2.1-a-c1e94b9ee1", "pod":"coredns-7db6d8ff4d-qlsjk", "timestamp":"2024-12-13 01:30:05.001770358 +0000 UTC"}, Hostname:"ci-4081.2.1-a-c1e94b9ee1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:30:05.154932 containerd[1696]: 2024-12-13 01:30:05.040 [INFO][5110] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:05.154932 containerd[1696]: 2024-12-13 01:30:05.040 [INFO][5110] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:30:05.154932 containerd[1696]: 2024-12-13 01:30:05.040 [INFO][5110] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-a-c1e94b9ee1' Dec 13 01:30:05.154932 containerd[1696]: 2024-12-13 01:30:05.050 [INFO][5110] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f6775921b6654910333d4a2016392326ae25e554b0a849e15b8942112115ae1f" host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:05.154932 containerd[1696]: 2024-12-13 01:30:05.057 [INFO][5110] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:05.154932 containerd[1696]: 2024-12-13 01:30:05.074 [INFO][5110] ipam/ipam.go 489: Trying affinity for 192.168.92.0/26 host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:05.154932 containerd[1696]: 2024-12-13 01:30:05.077 [INFO][5110] ipam/ipam.go 155: Attempting to load block cidr=192.168.92.0/26 host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:05.154932 containerd[1696]: 2024-12-13 01:30:05.080 [INFO][5110] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.92.0/26 host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:05.154932 containerd[1696]: 2024-12-13 01:30:05.080 [INFO][5110] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.92.0/26 handle="k8s-pod-network.f6775921b6654910333d4a2016392326ae25e554b0a849e15b8942112115ae1f" host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:05.154932 containerd[1696]: 2024-12-13 01:30:05.083 [INFO][5110] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f6775921b6654910333d4a2016392326ae25e554b0a849e15b8942112115ae1f Dec 13 01:30:05.154932 containerd[1696]: 2024-12-13 01:30:05.090 [INFO][5110] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.92.0/26 handle="k8s-pod-network.f6775921b6654910333d4a2016392326ae25e554b0a849e15b8942112115ae1f" host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:05.154932 containerd[1696]: 2024-12-13 01:30:05.103 [INFO][5110] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.92.5/26] block=192.168.92.0/26 handle="k8s-pod-network.f6775921b6654910333d4a2016392326ae25e554b0a849e15b8942112115ae1f" host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:05.154932 containerd[1696]: 2024-12-13 01:30:05.104 [INFO][5110] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.92.5/26] handle="k8s-pod-network.f6775921b6654910333d4a2016392326ae25e554b0a849e15b8942112115ae1f" host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:05.154932 containerd[1696]: 2024-12-13 01:30:05.104 [INFO][5110] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:30:05.154932 containerd[1696]: 2024-12-13 01:30:05.104 [INFO][5110] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.5/26] IPv6=[] ContainerID="f6775921b6654910333d4a2016392326ae25e554b0a849e15b8942112115ae1f" HandleID="k8s-pod-network.f6775921b6654910333d4a2016392326ae25e554b0a849e15b8942112115ae1f" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--qlsjk-eth0" Dec 13 01:30:05.155669 containerd[1696]: 2024-12-13 01:30:05.109 [INFO][5076] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f6775921b6654910333d4a2016392326ae25e554b0a849e15b8942112115ae1f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qlsjk" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--qlsjk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--qlsjk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f2830d02-b780-4c40-8169-2cb412cf67f7", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-c1e94b9ee1", ContainerID:"", Pod:"coredns-7db6d8ff4d-qlsjk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali09ad80e31aa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:05.155669 containerd[1696]: 2024-12-13 01:30:05.111 [INFO][5076] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.92.5/32] ContainerID="f6775921b6654910333d4a2016392326ae25e554b0a849e15b8942112115ae1f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qlsjk" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--qlsjk-eth0" Dec 13 01:30:05.155669 containerd[1696]: 2024-12-13 01:30:05.111 [INFO][5076] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali09ad80e31aa ContainerID="f6775921b6654910333d4a2016392326ae25e554b0a849e15b8942112115ae1f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qlsjk" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--qlsjk-eth0" Dec 13 01:30:05.155669 containerd[1696]: 2024-12-13 01:30:05.122 [INFO][5076] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f6775921b6654910333d4a2016392326ae25e554b0a849e15b8942112115ae1f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qlsjk" 
WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--qlsjk-eth0" Dec 13 01:30:05.155669 containerd[1696]: 2024-12-13 01:30:05.129 [INFO][5076] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f6775921b6654910333d4a2016392326ae25e554b0a849e15b8942112115ae1f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qlsjk" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--qlsjk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--qlsjk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f2830d02-b780-4c40-8169-2cb412cf67f7", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-c1e94b9ee1", ContainerID:"f6775921b6654910333d4a2016392326ae25e554b0a849e15b8942112115ae1f", Pod:"coredns-7db6d8ff4d-qlsjk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali09ad80e31aa", MAC:"06:7b:ae:d2:61:8b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:05.155669 containerd[1696]: 2024-12-13 01:30:05.149 [INFO][5076] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f6775921b6654910333d4a2016392326ae25e554b0a849e15b8942112115ae1f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qlsjk" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--qlsjk-eth0" Dec 13 01:30:05.194865 systemd-networkd[1482]: calicd1287a3e7b: Link UP Dec 13 01:30:05.197414 systemd-networkd[1482]: calicd1287a3e7b: Gained carrier Dec 13 01:30:05.206332 containerd[1696]: time="2024-12-13T01:30:05.206294521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6cd5658d56-plblc,Uid:7833e1a8-e541-4910-80f7-3565f077e0a5,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d887a7a2c8b549215834de743df20e56b5f0737f1c35f24bb2bea61f61d2f429\"" Dec 13 01:30:05.220910 containerd[1696]: time="2024-12-13T01:30:05.220252401Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:30:05.220910 containerd[1696]: time="2024-12-13T01:30:05.220315441Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:30:05.220910 containerd[1696]: time="2024-12-13T01:30:05.220331401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:05.220910 containerd[1696]: time="2024-12-13T01:30:05.220409441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:05.224956 containerd[1696]: 2024-12-13 01:30:04.915 [INFO][5086] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 01:30:05.224956 containerd[1696]: 2024-12-13 01:30:04.950 [INFO][5086] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--a--c1e94b9ee1-k8s-csi--node--driver--z7bfm-eth0 csi-node-driver- calico-system a57b7da1-6e0a-4e25-9020-f599a3a71d0b 803 0 2024-12-13 01:29:38 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.2.1-a-c1e94b9ee1 csi-node-driver-z7bfm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calicd1287a3e7b [] []}} ContainerID="6ef438a540089be7b917a555170e56911aa71bbaaf771ad99817f5036cc8ab1b" Namespace="calico-system" Pod="csi-node-driver-z7bfm" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-csi--node--driver--z7bfm-" Dec 13 01:30:05.224956 containerd[1696]: 2024-12-13 01:30:04.951 [INFO][5086] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6ef438a540089be7b917a555170e56911aa71bbaaf771ad99817f5036cc8ab1b" Namespace="calico-system" Pod="csi-node-driver-z7bfm" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-csi--node--driver--z7bfm-eth0" Dec 13 01:30:05.224956 containerd[1696]: 2024-12-13 01:30:05.052 [INFO][5114] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ef438a540089be7b917a555170e56911aa71bbaaf771ad99817f5036cc8ab1b" HandleID="k8s-pod-network.6ef438a540089be7b917a555170e56911aa71bbaaf771ad99817f5036cc8ab1b" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-csi--node--driver--z7bfm-eth0" Dec 13 01:30:05.224956 containerd[1696]: 2024-12-13 01:30:05.074 [INFO][5114] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6ef438a540089be7b917a555170e56911aa71bbaaf771ad99817f5036cc8ab1b" HandleID="k8s-pod-network.6ef438a540089be7b917a555170e56911aa71bbaaf771ad99817f5036cc8ab1b" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-csi--node--driver--z7bfm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ebd20), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.2.1-a-c1e94b9ee1", "pod":"csi-node-driver-z7bfm", "timestamp":"2024-12-13 01:30:05.052833119 +0000 UTC"}, Hostname:"ci-4081.2.1-a-c1e94b9ee1", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:30:05.224956 containerd[1696]: 2024-12-13 01:30:05.074 [INFO][5114] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:05.224956 containerd[1696]: 2024-12-13 01:30:05.106 [INFO][5114] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:30:05.224956 containerd[1696]: 2024-12-13 01:30:05.106 [INFO][5114] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-a-c1e94b9ee1' Dec 13 01:30:05.224956 containerd[1696]: 2024-12-13 01:30:05.115 [INFO][5114] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6ef438a540089be7b917a555170e56911aa71bbaaf771ad99817f5036cc8ab1b" host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:05.224956 containerd[1696]: 2024-12-13 01:30:05.126 [INFO][5114] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:05.224956 containerd[1696]: 2024-12-13 01:30:05.139 [INFO][5114] ipam/ipam.go 489: Trying affinity for 192.168.92.0/26 host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:05.224956 containerd[1696]: 2024-12-13 01:30:05.143 [INFO][5114] ipam/ipam.go 155: Attempting to load block cidr=192.168.92.0/26 host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:05.224956 containerd[1696]: 2024-12-13 01:30:05.151 [INFO][5114] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.92.0/26 host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:05.224956 containerd[1696]: 2024-12-13 01:30:05.151 [INFO][5114] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.92.0/26 handle="k8s-pod-network.6ef438a540089be7b917a555170e56911aa71bbaaf771ad99817f5036cc8ab1b" host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:05.224956 containerd[1696]: 2024-12-13 01:30:05.156 [INFO][5114] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6ef438a540089be7b917a555170e56911aa71bbaaf771ad99817f5036cc8ab1b Dec 13 01:30:05.224956 containerd[1696]: 2024-12-13 01:30:05.165 [INFO][5114] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.92.0/26 handle="k8s-pod-network.6ef438a540089be7b917a555170e56911aa71bbaaf771ad99817f5036cc8ab1b" host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:05.224956 containerd[1696]: 2024-12-13 01:30:05.183 [INFO][5114] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.92.6/26] block=192.168.92.0/26 handle="k8s-pod-network.6ef438a540089be7b917a555170e56911aa71bbaaf771ad99817f5036cc8ab1b" host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:05.224956 containerd[1696]: 2024-12-13 01:30:05.183 [INFO][5114] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.92.6/26] handle="k8s-pod-network.6ef438a540089be7b917a555170e56911aa71bbaaf771ad99817f5036cc8ab1b" host="ci-4081.2.1-a-c1e94b9ee1" Dec 13 01:30:05.224956 containerd[1696]: 2024-12-13 01:30:05.183 [INFO][5114] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:30:05.224956 containerd[1696]: 2024-12-13 01:30:05.183 [INFO][5114] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.92.6/26] IPv6=[] ContainerID="6ef438a540089be7b917a555170e56911aa71bbaaf771ad99817f5036cc8ab1b" HandleID="k8s-pod-network.6ef438a540089be7b917a555170e56911aa71bbaaf771ad99817f5036cc8ab1b" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-csi--node--driver--z7bfm-eth0" Dec 13 01:30:05.225511 containerd[1696]: 2024-12-13 01:30:05.188 [INFO][5086] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6ef438a540089be7b917a555170e56911aa71bbaaf771ad99817f5036cc8ab1b" Namespace="calico-system" Pod="csi-node-driver-z7bfm" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-csi--node--driver--z7bfm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--c1e94b9ee1-k8s-csi--node--driver--z7bfm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a57b7da1-6e0a-4e25-9020-f599a3a71d0b", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-c1e94b9ee1", ContainerID:"", Pod:"csi-node-driver-z7bfm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.92.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicd1287a3e7b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:05.225511 containerd[1696]: 2024-12-13 01:30:05.188 [INFO][5086] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.92.6/32] ContainerID="6ef438a540089be7b917a555170e56911aa71bbaaf771ad99817f5036cc8ab1b" Namespace="calico-system" Pod="csi-node-driver-z7bfm" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-csi--node--driver--z7bfm-eth0" Dec 13 01:30:05.225511 containerd[1696]: 2024-12-13 01:30:05.188 [INFO][5086] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicd1287a3e7b ContainerID="6ef438a540089be7b917a555170e56911aa71bbaaf771ad99817f5036cc8ab1b" Namespace="calico-system" Pod="csi-node-driver-z7bfm" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-csi--node--driver--z7bfm-eth0" Dec 13 01:30:05.225511 containerd[1696]: 2024-12-13 01:30:05.197 [INFO][5086] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ef438a540089be7b917a555170e56911aa71bbaaf771ad99817f5036cc8ab1b" Namespace="calico-system" Pod="csi-node-driver-z7bfm" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-csi--node--driver--z7bfm-eth0" Dec 13 01:30:05.225511 containerd[1696]: 2024-12-13 01:30:05.198 [INFO][5086] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6ef438a540089be7b917a555170e56911aa71bbaaf771ad99817f5036cc8ab1b" 
Namespace="calico-system" Pod="csi-node-driver-z7bfm" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-csi--node--driver--z7bfm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--c1e94b9ee1-k8s-csi--node--driver--z7bfm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a57b7da1-6e0a-4e25-9020-f599a3a71d0b", ResourceVersion:"803", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-c1e94b9ee1", ContainerID:"6ef438a540089be7b917a555170e56911aa71bbaaf771ad99817f5036cc8ab1b", Pod:"csi-node-driver-z7bfm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.92.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicd1287a3e7b", MAC:"3e:f6:2e:cc:17:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:05.225511 containerd[1696]: 2024-12-13 01:30:05.221 [INFO][5086] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6ef438a540089be7b917a555170e56911aa71bbaaf771ad99817f5036cc8ab1b" Namespace="calico-system" Pod="csi-node-driver-z7bfm" WorkloadEndpoint="ci--4081.2.1--a--c1e94b9ee1-k8s-csi--node--driver--z7bfm-eth0" Dec 13 01:30:05.248889 systemd[1]: Started cri-containerd-f6775921b6654910333d4a2016392326ae25e554b0a849e15b8942112115ae1f.scope - libcontainer container f6775921b6654910333d4a2016392326ae25e554b0a849e15b8942112115ae1f. Dec 13 01:30:05.294485 containerd[1696]: time="2024-12-13T01:30:05.293619562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:30:05.294485 containerd[1696]: time="2024-12-13T01:30:05.293784722Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:30:05.294485 containerd[1696]: time="2024-12-13T01:30:05.293819002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:05.294485 containerd[1696]: time="2024-12-13T01:30:05.294226242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:30:05.332348 systemd[1]: Started cri-containerd-6ef438a540089be7b917a555170e56911aa71bbaaf771ad99817f5036cc8ab1b.scope - libcontainer container 6ef438a540089be7b917a555170e56911aa71bbaaf771ad99817f5036cc8ab1b. 
Dec 13 01:30:05.343695 containerd[1696]: time="2024-12-13T01:30:05.342974403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qlsjk,Uid:f2830d02-b780-4c40-8169-2cb412cf67f7,Namespace:kube-system,Attempt:1,} returns sandbox id \"f6775921b6654910333d4a2016392326ae25e554b0a849e15b8942112115ae1f\"" Dec 13 01:30:05.357335 containerd[1696]: time="2024-12-13T01:30:05.357204763Z" level=info msg="CreateContainer within sandbox \"f6775921b6654910333d4a2016392326ae25e554b0a849e15b8942112115ae1f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:30:05.397948 containerd[1696]: time="2024-12-13T01:30:05.397733004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z7bfm,Uid:a57b7da1-6e0a-4e25-9020-f599a3a71d0b,Namespace:calico-system,Attempt:1,} returns sandbox id \"6ef438a540089be7b917a555170e56911aa71bbaaf771ad99817f5036cc8ab1b\"" Dec 13 01:30:05.426344 containerd[1696]: time="2024-12-13T01:30:05.426202124Z" level=info msg="CreateContainer within sandbox \"f6775921b6654910333d4a2016392326ae25e554b0a849e15b8942112115ae1f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8750bb26856a6955136a1015f53c4a1b0fcbf1f537a431f9843957356b77c86e\"" Dec 13 01:30:05.428269 containerd[1696]: time="2024-12-13T01:30:05.427453404Z" level=info msg="StartContainer for \"8750bb26856a6955136a1015f53c4a1b0fcbf1f537a431f9843957356b77c86e\"" Dec 13 01:30:05.463025 systemd[1]: Started cri-containerd-8750bb26856a6955136a1015f53c4a1b0fcbf1f537a431f9843957356b77c86e.scope - libcontainer container 8750bb26856a6955136a1015f53c4a1b0fcbf1f537a431f9843957356b77c86e. Dec 13 01:30:05.506473 containerd[1696]: time="2024-12-13T01:30:05.506419405Z" level=info msg="StartContainer for \"8750bb26856a6955136a1015f53c4a1b0fcbf1f537a431f9843957356b77c86e\" returns successfully" Dec 13 01:30:05.749078 kubelet[3180]: I1213 01:30:05.748593 3180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-qlsjk" podStartSLOduration=34.748563298 podStartE2EDuration="34.748563298s" podCreationTimestamp="2024-12-13 01:29:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:30:05.748153338 +0000 UTC m=+49.372926457" watchObservedRunningTime="2024-12-13 01:30:05.748563298 +0000 UTC m=+49.373336417" Dec 13 01:30:05.763561 containerd[1696]: time="2024-12-13T01:30:05.763511296Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:05.766909 containerd[1696]: time="2024-12-13T01:30:05.766870056Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Dec 13 01:30:05.775153 containerd[1696]: time="2024-12-13T01:30:05.774080535Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:05.781090 containerd[1696]: time="2024-12-13T01:30:05.781037534Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:05.782155 containerd[1696]: time="2024-12-13T01:30:05.781528974Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id 
\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 3.660840414s" Dec 13 01:30:05.782155 containerd[1696]: time="2024-12-13T01:30:05.781567814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Dec 13 01:30:05.784399 containerd[1696]: time="2024-12-13T01:30:05.784204733Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 01:30:05.787980 containerd[1696]: time="2024-12-13T01:30:05.787341133Z" level=info msg="CreateContainer within sandbox \"c6d52d8bd0363b1e8c1deac91a602b51e6d993dc0558cb3e9add1a5582cead6d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:30:05.892871 containerd[1696]: time="2024-12-13T01:30:05.892821559Z" level=info msg="CreateContainer within sandbox \"c6d52d8bd0363b1e8c1deac91a602b51e6d993dc0558cb3e9add1a5582cead6d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"572eb659fb10bbabfa91626a858842852ee00a16f664ef19a61e21b5251d2336\"" Dec 13 01:30:05.894720 containerd[1696]: time="2024-12-13T01:30:05.894471719Z" level=info msg="StartContainer for \"572eb659fb10bbabfa91626a858842852ee00a16f664ef19a61e21b5251d2336\"" Dec 13 01:30:05.932855 systemd[1]: Started cri-containerd-572eb659fb10bbabfa91626a858842852ee00a16f664ef19a61e21b5251d2336.scope - libcontainer container 572eb659fb10bbabfa91626a858842852ee00a16f664ef19a61e21b5251d2336. Dec 13 01:30:05.971465 containerd[1696]: time="2024-12-13T01:30:05.971414189Z" level=info msg="StartContainer for \"572eb659fb10bbabfa91626a858842852ee00a16f664ef19a61e21b5251d2336\" returns successfully" Dec 13 01:30:06.114325 kubelet[3180]: I1213 01:30:06.113859 3180 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:30:06.341820 systemd-networkd[1482]: cali2cc7db0247e: Gained IPv6LL Dec 13 01:30:06.405898 systemd-networkd[1482]: cali09ad80e31aa: Gained IPv6LL Dec 13 01:30:06.752167 kubelet[3180]: I1213 01:30:06.752007 3180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6cd5658d56-mlhmd" podStartSLOduration=25.088196124 podStartE2EDuration="28.751989377s" podCreationTimestamp="2024-12-13 01:29:38 +0000 UTC" firstStartedPulling="2024-12-13 01:30:02.1200118 +0000 UTC m=+45.744784919" lastFinishedPulling="2024-12-13 01:30:05.783805053 +0000 UTC m=+49.408578172" observedRunningTime="2024-12-13 01:30:06.751469738 +0000 UTC m=+50.376242857" watchObservedRunningTime="2024-12-13 01:30:06.751989377 +0000 UTC m=+50.376762496" Dec 13 01:30:06.981933 systemd-networkd[1482]: calicd1287a3e7b: Gained IPv6LL Dec 13 01:30:07.104760 kernel: bpftool[5441]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 01:30:07.371515 systemd-networkd[1482]: vxlan.calico: Link UP Dec 13 01:30:07.371529 systemd-networkd[1482]: vxlan.calico: Gained carrier Dec 13 01:30:08.565233 containerd[1696]: time="2024-12-13T01:30:08.565171088Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:08.568777 containerd[1696]: time="2024-12-13T01:30:08.568706088Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Dec 13 01:30:08.573554 containerd[1696]: time="2024-12-13T01:30:08.573480207Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:08.580494 containerd[1696]: time="2024-12-13T01:30:08.580411806Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:08.581563 containerd[1696]: time="2024-12-13T01:30:08.581132686Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 2.796888833s" Dec 13 01:30:08.581563 containerd[1696]: time="2024-12-13T01:30:08.581173246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Dec 13 01:30:08.582889 containerd[1696]: time="2024-12-13T01:30:08.582844006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:30:08.597262 containerd[1696]: time="2024-12-13T01:30:08.597147205Z" level=info msg="CreateContainer within sandbox \"13507c54a057b0c38a58d30ee050aee20d14c934994f92ab39d28c5b4bfa1dff\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 01:30:08.648980 containerd[1696]: time="2024-12-13T01:30:08.648933119Z" level=info msg="CreateContainer within sandbox \"13507c54a057b0c38a58d30ee050aee20d14c934994f92ab39d28c5b4bfa1dff\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"0cef6aedfc416870b02ba9d93ba1dd3fcedb4142526bbd2c5f339a6105f9dbdc\"" Dec 13 01:30:08.651033 containerd[1696]: time="2024-12-13T01:30:08.650433438Z" level=info msg="StartContainer for \"0cef6aedfc416870b02ba9d93ba1dd3fcedb4142526bbd2c5f339a6105f9dbdc\"" Dec 13 01:30:08.683897 systemd[1]: Started cri-containerd-0cef6aedfc416870b02ba9d93ba1dd3fcedb4142526bbd2c5f339a6105f9dbdc.scope - libcontainer container 0cef6aedfc416870b02ba9d93ba1dd3fcedb4142526bbd2c5f339a6105f9dbdc. 
Dec 13 01:30:08.719824 containerd[1696]: time="2024-12-13T01:30:08.719328470Z" level=info msg="StartContainer for \"0cef6aedfc416870b02ba9d93ba1dd3fcedb4142526bbd2c5f339a6105f9dbdc\" returns successfully" Dec 13 01:30:08.832872 kubelet[3180]: I1213 01:30:08.832724 3180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-75778b7f7b-qcllh" podStartSLOduration=24.128531861 podStartE2EDuration="29.832704857s" podCreationTimestamp="2024-12-13 01:29:39 +0000 UTC" firstStartedPulling="2024-12-13 01:30:02.87813985 +0000 UTC m=+46.502912969" lastFinishedPulling="2024-12-13 01:30:08.582312886 +0000 UTC m=+52.207085965" observedRunningTime="2024-12-13 01:30:08.771026464 +0000 UTC m=+52.395799663" watchObservedRunningTime="2024-12-13 01:30:08.832704857 +0000 UTC m=+52.457477976" Dec 13 01:30:08.934943 containerd[1696]: time="2024-12-13T01:30:08.934883206Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:08.939316 containerd[1696]: time="2024-12-13T01:30:08.939264165Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 13 01:30:08.943561 containerd[1696]: time="2024-12-13T01:30:08.943500685Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 360.588079ms" Dec 13 01:30:08.943561 containerd[1696]: time="2024-12-13T01:30:08.943553765Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Dec 13 01:30:08.945926 containerd[1696]: time="2024-12-13T01:30:08.945684124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 01:30:08.948461 containerd[1696]: time="2024-12-13T01:30:08.947058244Z" level=info msg="CreateContainer within sandbox \"d887a7a2c8b549215834de743df20e56b5f0737f1c35f24bb2bea61f61d2f429\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:30:09.002910 containerd[1696]: time="2024-12-13T01:30:09.002842678Z" level=info msg="CreateContainer within sandbox \"d887a7a2c8b549215834de743df20e56b5f0737f1c35f24bb2bea61f61d2f429\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6d46e7b22d3bb1ce98ac640ef20e68c6574ee409c4073a1f00d65ceb29f3302d\"" Dec 13 01:30:09.004310 containerd[1696]: time="2024-12-13T01:30:09.004265878Z" level=info msg="StartContainer for \"6d46e7b22d3bb1ce98ac640ef20e68c6574ee409c4073a1f00d65ceb29f3302d\"" Dec 13 01:30:09.030065 systemd-networkd[1482]: vxlan.calico: Gained IPv6LL Dec 13 01:30:09.046053 systemd[1]: Started cri-containerd-6d46e7b22d3bb1ce98ac640ef20e68c6574ee409c4073a1f00d65ceb29f3302d.scope - libcontainer container 6d46e7b22d3bb1ce98ac640ef20e68c6574ee409c4073a1f00d65ceb29f3302d. 
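Editor's note on the kubelet pod_startup_latency_tracker entries above: the two durations relate to the logged timestamps in a consistent way. For calico-apiserver-6cd5658d56-plblc's sibling entry (pod calico-apiserver-6cd5658d56-mlhmd), podStartE2EDuration equals the observation timestamp minus podCreationTimestamp, and podStartSLOduration equals that figure minus the firstStartedPulling-to-lastFinishedPulling image-pull window. Treat the exact field choices as an assumption inferred from the numbers; the short Go check below just reproduces the logged values from the timestamps in that entry.

// latency.go - back-of-the-envelope check of the pod startup durations above.
// Assumption: E2E = observed - created; SLO figure = E2E minus image-pull time.
package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}

	created := parse("2024-12-13 01:29:38 +0000 UTC")            // podCreationTimestamp
	firstPull := parse("2024-12-13 01:30:02.1200118 +0000 UTC")  // firstStartedPulling
	lastPull := parse("2024-12-13 01:30:05.783805053 +0000 UTC") // lastFinishedPulling
	observed := parse("2024-12-13 01:30:06.751989377 +0000 UTC") // watchObservedRunningTime, monotonic suffix dropped

	e2e := observed.Sub(created)
	slo := e2e - lastPull.Sub(firstPull)

	fmt.Println(e2e) // 28.751989377s, the logged podStartE2EDuration
	fmt.Println(slo) // 25.088196124s, the logged podStartSLOduration
}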
Dec 13 01:30:09.094974 containerd[1696]: time="2024-12-13T01:30:09.094714027Z" level=info msg="StartContainer for \"6d46e7b22d3bb1ce98ac640ef20e68c6574ee409c4073a1f00d65ceb29f3302d\" returns successfully" Dec 13 01:30:10.494889 containerd[1696]: time="2024-12-13T01:30:10.494304546Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:10.497379 containerd[1696]: time="2024-12-13T01:30:10.497327785Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Dec 13 01:30:10.502947 containerd[1696]: time="2024-12-13T01:30:10.502862825Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:10.513857 containerd[1696]: time="2024-12-13T01:30:10.513770423Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:10.514643 containerd[1696]: time="2024-12-13T01:30:10.514500383Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 1.568775499s" Dec 13 01:30:10.514643 containerd[1696]: time="2024-12-13T01:30:10.514537703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Dec 13 01:30:10.518555 containerd[1696]: time="2024-12-13T01:30:10.518270023Z" level=info msg="CreateContainer within sandbox \"6ef438a540089be7b917a555170e56911aa71bbaaf771ad99817f5036cc8ab1b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 01:30:10.555654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2681831712.mount: Deactivated successfully. Dec 13 01:30:10.573087 containerd[1696]: time="2024-12-13T01:30:10.572983856Z" level=info msg="CreateContainer within sandbox \"6ef438a540089be7b917a555170e56911aa71bbaaf771ad99817f5036cc8ab1b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4819452eb9f5c65cc71bbc3a99d05900bd3104c450eb79041e3ed302a1535438\"" Dec 13 01:30:10.573688 containerd[1696]: time="2024-12-13T01:30:10.573556416Z" level=info msg="StartContainer for \"4819452eb9f5c65cc71bbc3a99d05900bd3104c450eb79041e3ed302a1535438\"" Dec 13 01:30:10.616870 systemd[1]: Started cri-containerd-4819452eb9f5c65cc71bbc3a99d05900bd3104c450eb79041e3ed302a1535438.scope - libcontainer container 4819452eb9f5c65cc71bbc3a99d05900bd3104c450eb79041e3ed302a1535438. 
Dec 13 01:30:10.682896 containerd[1696]: time="2024-12-13T01:30:10.682838524Z" level=info msg="StartContainer for \"4819452eb9f5c65cc71bbc3a99d05900bd3104c450eb79041e3ed302a1535438\" returns successfully" Dec 13 01:30:10.684561 containerd[1696]: time="2024-12-13T01:30:10.684449684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 01:30:10.759111 kubelet[3180]: I1213 01:30:10.758950 3180 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:30:14.689485 containerd[1696]: time="2024-12-13T01:30:14.688864434Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:14.692455 containerd[1696]: time="2024-12-13T01:30:14.692427354Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Dec 13 01:30:14.698338 containerd[1696]: time="2024-12-13T01:30:14.698299033Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:14.704545 containerd[1696]: time="2024-12-13T01:30:14.704499873Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:30:14.705523 containerd[1696]: time="2024-12-13T01:30:14.705385753Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 4.020881549s" Dec 13 01:30:14.705523 containerd[1696]: time="2024-12-13T01:30:14.705419313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Dec 13 01:30:14.708608 containerd[1696]: time="2024-12-13T01:30:14.708562753Z" level=info msg="CreateContainer within sandbox \"6ef438a540089be7b917a555170e56911aa71bbaaf771ad99817f5036cc8ab1b\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 01:30:14.759688 containerd[1696]: time="2024-12-13T01:30:14.759624752Z" level=info msg="CreateContainer within sandbox \"6ef438a540089be7b917a555170e56911aa71bbaaf771ad99817f5036cc8ab1b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b1e1b8d0653990a51f3b957e81245253715f7d4347584ef75bd15083a552d2bc\"" Dec 13 01:30:14.761911 containerd[1696]: time="2024-12-13T01:30:14.760234152Z" level=info msg="StartContainer for \"b1e1b8d0653990a51f3b957e81245253715f7d4347584ef75bd15083a552d2bc\"" Dec 13 01:30:14.792954 systemd[1]: run-containerd-runc-k8s.io-b1e1b8d0653990a51f3b957e81245253715f7d4347584ef75bd15083a552d2bc-runc.yvTPM8.mount: Deactivated successfully. Dec 13 01:30:14.799822 systemd[1]: Started cri-containerd-b1e1b8d0653990a51f3b957e81245253715f7d4347584ef75bd15083a552d2bc.scope - libcontainer container b1e1b8d0653990a51f3b957e81245253715f7d4347584ef75bd15083a552d2bc. 
Dec 13 01:30:14.830962 containerd[1696]: time="2024-12-13T01:30:14.830910271Z" level=info msg="StartContainer for \"b1e1b8d0653990a51f3b957e81245253715f7d4347584ef75bd15083a552d2bc\" returns successfully" Dec 13 01:30:15.587948 kubelet[3180]: I1213 01:30:15.587908 3180 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 01:30:15.591048 kubelet[3180]: I1213 01:30:15.591008 3180 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 01:30:15.788086 kubelet[3180]: I1213 01:30:15.787780 3180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-z7bfm" podStartSLOduration=28.48238146 podStartE2EDuration="37.787763809s" podCreationTimestamp="2024-12-13 01:29:38 +0000 UTC" firstStartedPulling="2024-12-13 01:30:05.400850484 +0000 UTC m=+49.025623603" lastFinishedPulling="2024-12-13 01:30:14.706232873 +0000 UTC m=+58.331005952" observedRunningTime="2024-12-13 01:30:15.787530089 +0000 UTC m=+59.412303208" watchObservedRunningTime="2024-12-13 01:30:15.787763809 +0000 UTC m=+59.412536928" Dec 13 01:30:15.789422 kubelet[3180]: I1213 01:30:15.789263 3180 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6cd5658d56-plblc" podStartSLOduration=34.054718726 podStartE2EDuration="37.789238729s" podCreationTimestamp="2024-12-13 01:29:38 +0000 UTC" firstStartedPulling="2024-12-13 01:30:05.210308801 +0000 UTC m=+48.835081920" lastFinishedPulling="2024-12-13 01:30:08.944828844 +0000 UTC m=+52.569601923" observedRunningTime="2024-12-13 01:30:09.771955589 +0000 UTC m=+53.396728708" watchObservedRunningTime="2024-12-13 01:30:15.789238729 +0000 UTC m=+59.414011848" Dec 13 01:30:16.479582 containerd[1696]: time="2024-12-13T01:30:16.479526873Z" level=info msg="StopPodSandbox for \"cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837\"" Dec 13 01:30:16.551357 containerd[1696]: 2024-12-13 01:30:16.518 [WARNING][5723] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--c1e94b9ee1-k8s-csi--node--driver--z7bfm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a57b7da1-6e0a-4e25-9020-f599a3a71d0b", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-c1e94b9ee1", ContainerID:"6ef438a540089be7b917a555170e56911aa71bbaaf771ad99817f5036cc8ab1b", Pod:"csi-node-driver-z7bfm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.92.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicd1287a3e7b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:16.551357 containerd[1696]: 2024-12-13 01:30:16.518 [INFO][5723] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837" Dec 13 01:30:16.551357 containerd[1696]: 2024-12-13 01:30:16.518 [INFO][5723] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837" iface="eth0" netns="" Dec 13 01:30:16.551357 containerd[1696]: 2024-12-13 01:30:16.518 [INFO][5723] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837" Dec 13 01:30:16.551357 containerd[1696]: 2024-12-13 01:30:16.518 [INFO][5723] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837" Dec 13 01:30:16.551357 containerd[1696]: 2024-12-13 01:30:16.537 [INFO][5730] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837" HandleID="k8s-pod-network.cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-csi--node--driver--z7bfm-eth0" Dec 13 01:30:16.551357 containerd[1696]: 2024-12-13 01:30:16.538 [INFO][5730] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:16.551357 containerd[1696]: 2024-12-13 01:30:16.538 [INFO][5730] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:16.551357 containerd[1696]: 2024-12-13 01:30:16.546 [WARNING][5730] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837" HandleID="k8s-pod-network.cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-csi--node--driver--z7bfm-eth0" Dec 13 01:30:16.551357 containerd[1696]: 2024-12-13 01:30:16.546 [INFO][5730] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837" HandleID="k8s-pod-network.cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-csi--node--driver--z7bfm-eth0" Dec 13 01:30:16.551357 containerd[1696]: 2024-12-13 01:30:16.548 [INFO][5730] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:16.551357 containerd[1696]: 2024-12-13 01:30:16.549 [INFO][5723] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837" Dec 13 01:30:16.552193 containerd[1696]: time="2024-12-13T01:30:16.551385472Z" level=info msg="TearDown network for sandbox \"cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837\" successfully" Dec 13 01:30:16.552193 containerd[1696]: time="2024-12-13T01:30:16.551410512Z" level=info msg="StopPodSandbox for \"cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837\" returns successfully" Dec 13 01:30:16.552334 containerd[1696]: time="2024-12-13T01:30:16.552303872Z" level=info msg="RemovePodSandbox for \"cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837\"" Dec 13 01:30:16.552485 containerd[1696]: time="2024-12-13T01:30:16.552389912Z" level=info msg="Forcibly stopping sandbox \"cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837\"" Dec 13 01:30:16.631096 containerd[1696]: 2024-12-13 01:30:16.598 [WARNING][5748] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--c1e94b9ee1-k8s-csi--node--driver--z7bfm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a57b7da1-6e0a-4e25-9020-f599a3a71d0b", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-c1e94b9ee1", ContainerID:"6ef438a540089be7b917a555170e56911aa71bbaaf771ad99817f5036cc8ab1b", Pod:"csi-node-driver-z7bfm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.92.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicd1287a3e7b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:16.631096 containerd[1696]: 2024-12-13 01:30:16.599 [INFO][5748] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837" Dec 13 01:30:16.631096 containerd[1696]: 2024-12-13 01:30:16.599 [INFO][5748] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837" iface="eth0" netns="" Dec 13 01:30:16.631096 containerd[1696]: 2024-12-13 01:30:16.599 [INFO][5748] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837" Dec 13 01:30:16.631096 containerd[1696]: 2024-12-13 01:30:16.599 [INFO][5748] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837" Dec 13 01:30:16.631096 containerd[1696]: 2024-12-13 01:30:16.617 [INFO][5755] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837" HandleID="k8s-pod-network.cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-csi--node--driver--z7bfm-eth0" Dec 13 01:30:16.631096 containerd[1696]: 2024-12-13 01:30:16.617 [INFO][5755] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:16.631096 containerd[1696]: 2024-12-13 01:30:16.617 [INFO][5755] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:16.631096 containerd[1696]: 2024-12-13 01:30:16.626 [WARNING][5755] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837" HandleID="k8s-pod-network.cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-csi--node--driver--z7bfm-eth0" Dec 13 01:30:16.631096 containerd[1696]: 2024-12-13 01:30:16.626 [INFO][5755] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837" HandleID="k8s-pod-network.cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-csi--node--driver--z7bfm-eth0" Dec 13 01:30:16.631096 containerd[1696]: 2024-12-13 01:30:16.627 [INFO][5755] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:16.631096 containerd[1696]: 2024-12-13 01:30:16.629 [INFO][5748] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837" Dec 13 01:30:16.632096 containerd[1696]: time="2024-12-13T01:30:16.631577110Z" level=info msg="TearDown network for sandbox \"cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837\" successfully" Dec 13 01:30:16.642859 containerd[1696]: time="2024-12-13T01:30:16.642809550Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:30:16.642991 containerd[1696]: time="2024-12-13T01:30:16.642884030Z" level=info msg="RemovePodSandbox \"cba2b158ae734185622bf07a3c884b9cfc3ac68708a64690dcbf52d8ac6cb837\" returns successfully" Dec 13 01:30:16.643705 containerd[1696]: time="2024-12-13T01:30:16.643413150Z" level=info msg="StopPodSandbox for \"1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94\"" Dec 13 01:30:16.743461 containerd[1696]: 2024-12-13 01:30:16.707 [WARNING][5774] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--mlhmd-eth0", GenerateName:"calico-apiserver-6cd5658d56-", Namespace:"calico-apiserver", SelfLink:"", UID:"fba0a9d1-bb75-416a-9f26-d9707b860c3d", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cd5658d56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-c1e94b9ee1", ContainerID:"c6d52d8bd0363b1e8c1deac91a602b51e6d993dc0558cb3e9add1a5582cead6d", Pod:"calico-apiserver-6cd5658d56-mlhmd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3541c067069", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:16.743461 containerd[1696]: 2024-12-13 01:30:16.708 [INFO][5774] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94" Dec 13 01:30:16.743461 containerd[1696]: 2024-12-13 01:30:16.708 [INFO][5774] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94" iface="eth0" netns="" Dec 13 01:30:16.743461 containerd[1696]: 2024-12-13 01:30:16.708 [INFO][5774] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94" Dec 13 01:30:16.743461 containerd[1696]: 2024-12-13 01:30:16.708 [INFO][5774] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94" Dec 13 01:30:16.743461 containerd[1696]: 2024-12-13 01:30:16.729 [INFO][5782] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94" HandleID="k8s-pod-network.1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--mlhmd-eth0" Dec 13 01:30:16.743461 containerd[1696]: 2024-12-13 01:30:16.729 [INFO][5782] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:16.743461 containerd[1696]: 2024-12-13 01:30:16.729 [INFO][5782] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:16.743461 containerd[1696]: 2024-12-13 01:30:16.737 [WARNING][5782] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94" HandleID="k8s-pod-network.1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--mlhmd-eth0" Dec 13 01:30:16.743461 containerd[1696]: 2024-12-13 01:30:16.737 [INFO][5782] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94" HandleID="k8s-pod-network.1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--mlhmd-eth0" Dec 13 01:30:16.743461 containerd[1696]: 2024-12-13 01:30:16.739 [INFO][5782] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:16.743461 containerd[1696]: 2024-12-13 01:30:16.741 [INFO][5774] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94" Dec 13 01:30:16.743461 containerd[1696]: time="2024-12-13T01:30:16.742617268Z" level=info msg="TearDown network for sandbox \"1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94\" successfully" Dec 13 01:30:16.743461 containerd[1696]: time="2024-12-13T01:30:16.742642788Z" level=info msg="StopPodSandbox for \"1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94\" returns successfully" Dec 13 01:30:16.743461 containerd[1696]: time="2024-12-13T01:30:16.742997468Z" level=info msg="RemovePodSandbox for \"1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94\"" Dec 13 01:30:16.743461 containerd[1696]: time="2024-12-13T01:30:16.743024108Z" level=info msg="Forcibly stopping sandbox \"1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94\"" Dec 13 01:30:16.814471 containerd[1696]: 2024-12-13 01:30:16.780 [WARNING][5800] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--mlhmd-eth0", GenerateName:"calico-apiserver-6cd5658d56-", Namespace:"calico-apiserver", SelfLink:"", UID:"fba0a9d1-bb75-416a-9f26-d9707b860c3d", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cd5658d56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-c1e94b9ee1", ContainerID:"c6d52d8bd0363b1e8c1deac91a602b51e6d993dc0558cb3e9add1a5582cead6d", Pod:"calico-apiserver-6cd5658d56-mlhmd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3541c067069", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:16.814471 containerd[1696]: 2024-12-13 01:30:16.780 [INFO][5800] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94" Dec 13 01:30:16.814471 containerd[1696]: 2024-12-13 01:30:16.780 [INFO][5800] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94" iface="eth0" netns="" Dec 13 01:30:16.814471 containerd[1696]: 2024-12-13 01:30:16.780 [INFO][5800] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94" Dec 13 01:30:16.814471 containerd[1696]: 2024-12-13 01:30:16.780 [INFO][5800] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94" Dec 13 01:30:16.814471 containerd[1696]: 2024-12-13 01:30:16.802 [INFO][5807] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94" HandleID="k8s-pod-network.1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--mlhmd-eth0" Dec 13 01:30:16.814471 containerd[1696]: 2024-12-13 01:30:16.802 [INFO][5807] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:16.814471 containerd[1696]: 2024-12-13 01:30:16.802 [INFO][5807] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:16.814471 containerd[1696]: 2024-12-13 01:30:16.810 [WARNING][5807] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94" HandleID="k8s-pod-network.1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--mlhmd-eth0" Dec 13 01:30:16.814471 containerd[1696]: 2024-12-13 01:30:16.810 [INFO][5807] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94" HandleID="k8s-pod-network.1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--mlhmd-eth0" Dec 13 01:30:16.814471 containerd[1696]: 2024-12-13 01:30:16.811 [INFO][5807] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:16.814471 containerd[1696]: 2024-12-13 01:30:16.813 [INFO][5800] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94" Dec 13 01:30:16.814471 containerd[1696]: time="2024-12-13T01:30:16.814555506Z" level=info msg="TearDown network for sandbox \"1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94\" successfully" Dec 13 01:30:16.822109 containerd[1696]: time="2024-12-13T01:30:16.822063706Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:30:16.822222 containerd[1696]: time="2024-12-13T01:30:16.822145626Z" level=info msg="RemovePodSandbox \"1cfb90be26ef308d60ae2eca7c9b4a0e48b1ca798afcf26e345bebdcce377e94\" returns successfully" Dec 13 01:30:16.822847 containerd[1696]: time="2024-12-13T01:30:16.822624666Z" level=info msg="StopPodSandbox for \"86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f\"" Dec 13 01:30:16.889960 containerd[1696]: 2024-12-13 01:30:16.858 [WARNING][5825] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--9mvzk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6371d89c-b939-4616-95e1-8dfb9a85f504", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-c1e94b9ee1", ContainerID:"735e98d6dc87ada56bc50b00625e56d6e430025f1974b8739f6de423ee8f16fd", Pod:"coredns-7db6d8ff4d-9mvzk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali239ff68261f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:16.889960 containerd[1696]: 2024-12-13 01:30:16.858 [INFO][5825] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f" Dec 13 01:30:16.889960 containerd[1696]: 2024-12-13 01:30:16.858 [INFO][5825] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f" iface="eth0" netns="" Dec 13 01:30:16.889960 containerd[1696]: 2024-12-13 01:30:16.858 [INFO][5825] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f" Dec 13 01:30:16.889960 containerd[1696]: 2024-12-13 01:30:16.858 [INFO][5825] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f" Dec 13 01:30:16.889960 containerd[1696]: 2024-12-13 01:30:16.877 [INFO][5831] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f" HandleID="k8s-pod-network.86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--9mvzk-eth0" Dec 13 01:30:16.889960 containerd[1696]: 2024-12-13 01:30:16.877 [INFO][5831] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:16.889960 containerd[1696]: 2024-12-13 01:30:16.877 [INFO][5831] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:30:16.889960 containerd[1696]: 2024-12-13 01:30:16.885 [WARNING][5831] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f" HandleID="k8s-pod-network.86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--9mvzk-eth0" Dec 13 01:30:16.889960 containerd[1696]: 2024-12-13 01:30:16.885 [INFO][5831] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f" HandleID="k8s-pod-network.86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--9mvzk-eth0" Dec 13 01:30:16.889960 containerd[1696]: 2024-12-13 01:30:16.887 [INFO][5831] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:16.889960 containerd[1696]: 2024-12-13 01:30:16.888 [INFO][5825] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f" Dec 13 01:30:16.890379 containerd[1696]: time="2024-12-13T01:30:16.890007104Z" level=info msg="TearDown network for sandbox \"86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f\" successfully" Dec 13 01:30:16.890379 containerd[1696]: time="2024-12-13T01:30:16.890030624Z" level=info msg="StopPodSandbox for \"86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f\" returns successfully" Dec 13 01:30:16.890510 containerd[1696]: time="2024-12-13T01:30:16.890477704Z" level=info msg="RemovePodSandbox for \"86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f\"" Dec 13 01:30:16.890552 containerd[1696]: time="2024-12-13T01:30:16.890511104Z" level=info msg="Forcibly stopping sandbox \"86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f\"" Dec 13 01:30:16.955882 containerd[1696]: 2024-12-13 01:30:16.924 [WARNING][5849] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--9mvzk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6371d89c-b939-4616-95e1-8dfb9a85f504", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-c1e94b9ee1", ContainerID:"735e98d6dc87ada56bc50b00625e56d6e430025f1974b8739f6de423ee8f16fd", Pod:"coredns-7db6d8ff4d-9mvzk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali239ff68261f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:16.955882 containerd[1696]: 2024-12-13 01:30:16.924 [INFO][5849] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f" Dec 13 01:30:16.955882 containerd[1696]: 2024-12-13 01:30:16.924 [INFO][5849] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f" iface="eth0" netns="" Dec 13 01:30:16.955882 containerd[1696]: 2024-12-13 01:30:16.924 [INFO][5849] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f" Dec 13 01:30:16.955882 containerd[1696]: 2024-12-13 01:30:16.924 [INFO][5849] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f" Dec 13 01:30:16.955882 containerd[1696]: 2024-12-13 01:30:16.944 [INFO][5855] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f" HandleID="k8s-pod-network.86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--9mvzk-eth0" Dec 13 01:30:16.955882 containerd[1696]: 2024-12-13 01:30:16.944 [INFO][5855] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:16.955882 containerd[1696]: 2024-12-13 01:30:16.944 [INFO][5855] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:30:16.955882 containerd[1696]: 2024-12-13 01:30:16.951 [WARNING][5855] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f" HandleID="k8s-pod-network.86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--9mvzk-eth0" Dec 13 01:30:16.955882 containerd[1696]: 2024-12-13 01:30:16.952 [INFO][5855] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f" HandleID="k8s-pod-network.86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--9mvzk-eth0" Dec 13 01:30:16.955882 containerd[1696]: 2024-12-13 01:30:16.953 [INFO][5855] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:16.955882 containerd[1696]: 2024-12-13 01:30:16.954 [INFO][5849] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f" Dec 13 01:30:16.956277 containerd[1696]: time="2024-12-13T01:30:16.955927183Z" level=info msg="TearDown network for sandbox \"86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f\" successfully" Dec 13 01:30:16.965348 containerd[1696]: time="2024-12-13T01:30:16.965304103Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:30:16.965458 containerd[1696]: time="2024-12-13T01:30:16.965388583Z" level=info msg="RemovePodSandbox \"86f6440075e760e1df14da9b85e5cc68dcc6fa70e3178ec60edf9fcc7dca524f\" returns successfully" Dec 13 01:30:16.965976 containerd[1696]: time="2024-12-13T01:30:16.965950383Z" level=info msg="StopPodSandbox for \"657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4\"" Dec 13 01:30:17.030094 containerd[1696]: 2024-12-13 01:30:16.999 [WARNING][5873] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--c1e94b9ee1-k8s-calico--kube--controllers--75778b7f7b--qcllh-eth0", GenerateName:"calico-kube-controllers-75778b7f7b-", Namespace:"calico-system", SelfLink:"", UID:"a04b6ce3-5f74-4242-b4cb-d590307c3dbb", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75778b7f7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-c1e94b9ee1", ContainerID:"13507c54a057b0c38a58d30ee050aee20d14c934994f92ab39d28c5b4bfa1dff", Pod:"calico-kube-controllers-75778b7f7b-qcllh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.92.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid4c1c8b2ecf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:17.030094 containerd[1696]: 2024-12-13 01:30:16.999 [INFO][5873] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4" Dec 13 01:30:17.030094 containerd[1696]: 2024-12-13 01:30:16.999 [INFO][5873] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4" iface="eth0" netns="" Dec 13 01:30:17.030094 containerd[1696]: 2024-12-13 01:30:16.999 [INFO][5873] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4" Dec 13 01:30:17.030094 containerd[1696]: 2024-12-13 01:30:16.999 [INFO][5873] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4" Dec 13 01:30:17.030094 containerd[1696]: 2024-12-13 01:30:17.017 [INFO][5879] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4" HandleID="k8s-pod-network.657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--kube--controllers--75778b7f7b--qcllh-eth0" Dec 13 01:30:17.030094 containerd[1696]: 2024-12-13 01:30:17.017 [INFO][5879] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:17.030094 containerd[1696]: 2024-12-13 01:30:17.017 [INFO][5879] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:17.030094 containerd[1696]: 2024-12-13 01:30:17.025 [WARNING][5879] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4" HandleID="k8s-pod-network.657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--kube--controllers--75778b7f7b--qcllh-eth0" Dec 13 01:30:17.030094 containerd[1696]: 2024-12-13 01:30:17.025 [INFO][5879] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4" HandleID="k8s-pod-network.657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--kube--controllers--75778b7f7b--qcllh-eth0" Dec 13 01:30:17.030094 containerd[1696]: 2024-12-13 01:30:17.026 [INFO][5879] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:17.030094 containerd[1696]: 2024-12-13 01:30:17.028 [INFO][5873] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4" Dec 13 01:30:17.030094 containerd[1696]: time="2024-12-13T01:30:17.030061221Z" level=info msg="TearDown network for sandbox \"657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4\" successfully" Dec 13 01:30:17.030094 containerd[1696]: time="2024-12-13T01:30:17.030086901Z" level=info msg="StopPodSandbox for \"657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4\" returns successfully" Dec 13 01:30:17.031359 containerd[1696]: time="2024-12-13T01:30:17.030826901Z" level=info msg="RemovePodSandbox for \"657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4\"" Dec 13 01:30:17.031359 containerd[1696]: time="2024-12-13T01:30:17.030857181Z" level=info msg="Forcibly stopping sandbox \"657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4\"" Dec 13 01:30:17.105531 containerd[1696]: 2024-12-13 01:30:17.071 [WARNING][5897] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--c1e94b9ee1-k8s-calico--kube--controllers--75778b7f7b--qcllh-eth0", GenerateName:"calico-kube-controllers-75778b7f7b-", Namespace:"calico-system", SelfLink:"", UID:"a04b6ce3-5f74-4242-b4cb-d590307c3dbb", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"75778b7f7b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-c1e94b9ee1", ContainerID:"13507c54a057b0c38a58d30ee050aee20d14c934994f92ab39d28c5b4bfa1dff", Pod:"calico-kube-controllers-75778b7f7b-qcllh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.92.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid4c1c8b2ecf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:17.105531 containerd[1696]: 2024-12-13 01:30:17.072 [INFO][5897] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4" Dec 13 01:30:17.105531 containerd[1696]: 2024-12-13 01:30:17.072 [INFO][5897] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4" iface="eth0" netns="" Dec 13 01:30:17.105531 containerd[1696]: 2024-12-13 01:30:17.072 [INFO][5897] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4" Dec 13 01:30:17.105531 containerd[1696]: 2024-12-13 01:30:17.072 [INFO][5897] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4" Dec 13 01:30:17.105531 containerd[1696]: 2024-12-13 01:30:17.092 [INFO][5903] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4" HandleID="k8s-pod-network.657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--kube--controllers--75778b7f7b--qcllh-eth0" Dec 13 01:30:17.105531 containerd[1696]: 2024-12-13 01:30:17.093 [INFO][5903] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:17.105531 containerd[1696]: 2024-12-13 01:30:17.093 [INFO][5903] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:17.105531 containerd[1696]: 2024-12-13 01:30:17.101 [WARNING][5903] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4" HandleID="k8s-pod-network.657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--kube--controllers--75778b7f7b--qcllh-eth0" Dec 13 01:30:17.105531 containerd[1696]: 2024-12-13 01:30:17.101 [INFO][5903] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4" HandleID="k8s-pod-network.657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--kube--controllers--75778b7f7b--qcllh-eth0" Dec 13 01:30:17.105531 containerd[1696]: 2024-12-13 01:30:17.102 [INFO][5903] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:17.105531 containerd[1696]: 2024-12-13 01:30:17.104 [INFO][5897] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4" Dec 13 01:30:17.105954 containerd[1696]: time="2024-12-13T01:30:17.105572339Z" level=info msg="TearDown network for sandbox \"657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4\" successfully" Dec 13 01:30:17.115818 containerd[1696]: time="2024-12-13T01:30:17.115778179Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:30:17.115918 containerd[1696]: time="2024-12-13T01:30:17.115848739Z" level=info msg="RemovePodSandbox \"657ec16345493b7390e0ca7647ba40a9175deaf47775cb4c399b11471a3211e4\" returns successfully" Dec 13 01:30:17.116519 containerd[1696]: time="2024-12-13T01:30:17.116274819Z" level=info msg="StopPodSandbox for \"ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98\"" Dec 13 01:30:17.191224 containerd[1696]: 2024-12-13 01:30:17.160 [WARNING][5921] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--qlsjk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f2830d02-b780-4c40-8169-2cb412cf67f7", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-c1e94b9ee1", ContainerID:"f6775921b6654910333d4a2016392326ae25e554b0a849e15b8942112115ae1f", Pod:"coredns-7db6d8ff4d-qlsjk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali09ad80e31aa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:17.191224 containerd[1696]: 2024-12-13 01:30:17.161 [INFO][5921] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98" Dec 13 01:30:17.191224 containerd[1696]: 2024-12-13 01:30:17.161 [INFO][5921] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98" iface="eth0" netns="" Dec 13 01:30:17.191224 containerd[1696]: 2024-12-13 01:30:17.161 [INFO][5921] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98" Dec 13 01:30:17.191224 containerd[1696]: 2024-12-13 01:30:17.161 [INFO][5921] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98" Dec 13 01:30:17.191224 containerd[1696]: 2024-12-13 01:30:17.179 [INFO][5927] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98" HandleID="k8s-pod-network.ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--qlsjk-eth0" Dec 13 01:30:17.191224 containerd[1696]: 2024-12-13 01:30:17.179 [INFO][5927] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:17.191224 containerd[1696]: 2024-12-13 01:30:17.179 [INFO][5927] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:30:17.191224 containerd[1696]: 2024-12-13 01:30:17.187 [WARNING][5927] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98" HandleID="k8s-pod-network.ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--qlsjk-eth0" Dec 13 01:30:17.191224 containerd[1696]: 2024-12-13 01:30:17.187 [INFO][5927] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98" HandleID="k8s-pod-network.ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--qlsjk-eth0" Dec 13 01:30:17.191224 containerd[1696]: 2024-12-13 01:30:17.188 [INFO][5927] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:17.191224 containerd[1696]: 2024-12-13 01:30:17.189 [INFO][5921] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98" Dec 13 01:30:17.192305 containerd[1696]: time="2024-12-13T01:30:17.191262137Z" level=info msg="TearDown network for sandbox \"ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98\" successfully" Dec 13 01:30:17.192305 containerd[1696]: time="2024-12-13T01:30:17.191287697Z" level=info msg="StopPodSandbox for \"ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98\" returns successfully" Dec 13 01:30:17.192305 containerd[1696]: time="2024-12-13T01:30:17.191725817Z" level=info msg="RemovePodSandbox for \"ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98\"" Dec 13 01:30:17.192305 containerd[1696]: time="2024-12-13T01:30:17.191754017Z" level=info msg="Forcibly stopping sandbox \"ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98\"" Dec 13 01:30:17.258907 containerd[1696]: 2024-12-13 01:30:17.225 [WARNING][5946] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--qlsjk-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"f2830d02-b780-4c40-8169-2cb412cf67f7", ResourceVersion:"827", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-c1e94b9ee1", ContainerID:"f6775921b6654910333d4a2016392326ae25e554b0a849e15b8942112115ae1f", Pod:"coredns-7db6d8ff4d-qlsjk", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.92.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali09ad80e31aa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:17.258907 containerd[1696]: 2024-12-13 01:30:17.225 [INFO][5946] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98" Dec 13 01:30:17.258907 containerd[1696]: 2024-12-13 01:30:17.225 [INFO][5946] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98" iface="eth0" netns="" Dec 13 01:30:17.258907 containerd[1696]: 2024-12-13 01:30:17.225 [INFO][5946] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98" Dec 13 01:30:17.258907 containerd[1696]: 2024-12-13 01:30:17.225 [INFO][5946] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98" Dec 13 01:30:17.258907 containerd[1696]: 2024-12-13 01:30:17.245 [INFO][5953] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98" HandleID="k8s-pod-network.ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--qlsjk-eth0" Dec 13 01:30:17.258907 containerd[1696]: 2024-12-13 01:30:17.245 [INFO][5953] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:17.258907 containerd[1696]: 2024-12-13 01:30:17.245 [INFO][5953] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:30:17.258907 containerd[1696]: 2024-12-13 01:30:17.254 [WARNING][5953] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98" HandleID="k8s-pod-network.ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--qlsjk-eth0" Dec 13 01:30:17.258907 containerd[1696]: 2024-12-13 01:30:17.254 [INFO][5953] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98" HandleID="k8s-pod-network.ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-coredns--7db6d8ff4d--qlsjk-eth0" Dec 13 01:30:17.258907 containerd[1696]: 2024-12-13 01:30:17.255 [INFO][5953] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:17.258907 containerd[1696]: 2024-12-13 01:30:17.257 [INFO][5946] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98" Dec 13 01:30:17.259372 containerd[1696]: time="2024-12-13T01:30:17.258945136Z" level=info msg="TearDown network for sandbox \"ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98\" successfully" Dec 13 01:30:17.268429 containerd[1696]: time="2024-12-13T01:30:17.268373056Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:30:17.268507 containerd[1696]: time="2024-12-13T01:30:17.268464896Z" level=info msg="RemovePodSandbox \"ec1b76b862c635b0a60be0d11bdeea65692793dcd6806cb1f3696ad70668cb98\" returns successfully" Dec 13 01:30:17.269191 containerd[1696]: time="2024-12-13T01:30:17.268969816Z" level=info msg="StopPodSandbox for \"c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf\"" Dec 13 01:30:17.336762 containerd[1696]: 2024-12-13 01:30:17.304 [WARNING][5971] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--plblc-eth0", GenerateName:"calico-apiserver-6cd5658d56-", Namespace:"calico-apiserver", SelfLink:"", UID:"7833e1a8-e541-4910-80f7-3565f077e0a5", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cd5658d56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-c1e94b9ee1", ContainerID:"d887a7a2c8b549215834de743df20e56b5f0737f1c35f24bb2bea61f61d2f429", Pod:"calico-apiserver-6cd5658d56-plblc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2cc7db0247e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:17.336762 containerd[1696]: 2024-12-13 01:30:17.304 [INFO][5971] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf" Dec 13 01:30:17.336762 containerd[1696]: 2024-12-13 01:30:17.304 [INFO][5971] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf" iface="eth0" netns="" Dec 13 01:30:17.336762 containerd[1696]: 2024-12-13 01:30:17.304 [INFO][5971] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf" Dec 13 01:30:17.336762 containerd[1696]: 2024-12-13 01:30:17.304 [INFO][5971] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf" Dec 13 01:30:17.336762 containerd[1696]: 2024-12-13 01:30:17.324 [INFO][5978] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf" HandleID="k8s-pod-network.c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--plblc-eth0" Dec 13 01:30:17.336762 containerd[1696]: 2024-12-13 01:30:17.324 [INFO][5978] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:17.336762 containerd[1696]: 2024-12-13 01:30:17.324 [INFO][5978] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:17.336762 containerd[1696]: 2024-12-13 01:30:17.332 [WARNING][5978] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf" HandleID="k8s-pod-network.c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--plblc-eth0" Dec 13 01:30:17.336762 containerd[1696]: 2024-12-13 01:30:17.332 [INFO][5978] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf" HandleID="k8s-pod-network.c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--plblc-eth0" Dec 13 01:30:17.336762 containerd[1696]: 2024-12-13 01:30:17.333 [INFO][5978] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:17.336762 containerd[1696]: 2024-12-13 01:30:17.335 [INFO][5971] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf" Dec 13 01:30:17.336762 containerd[1696]: time="2024-12-13T01:30:17.336741454Z" level=info msg="TearDown network for sandbox \"c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf\" successfully" Dec 13 01:30:17.337168 containerd[1696]: time="2024-12-13T01:30:17.336774974Z" level=info msg="StopPodSandbox for \"c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf\" returns successfully" Dec 13 01:30:17.338106 containerd[1696]: time="2024-12-13T01:30:17.338075854Z" level=info msg="RemovePodSandbox for \"c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf\"" Dec 13 01:30:17.338324 containerd[1696]: time="2024-12-13T01:30:17.338112254Z" level=info msg="Forcibly stopping sandbox \"c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf\"" Dec 13 01:30:17.408313 containerd[1696]: 2024-12-13 01:30:17.371 [WARNING][5997] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--plblc-eth0", GenerateName:"calico-apiserver-6cd5658d56-", Namespace:"calico-apiserver", SelfLink:"", UID:"7833e1a8-e541-4910-80f7-3565f077e0a5", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 29, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6cd5658d56", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-c1e94b9ee1", ContainerID:"d887a7a2c8b549215834de743df20e56b5f0737f1c35f24bb2bea61f61d2f429", Pod:"calico-apiserver-6cd5658d56-plblc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.92.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2cc7db0247e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:30:17.408313 containerd[1696]: 2024-12-13 01:30:17.371 [INFO][5997] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf" Dec 13 01:30:17.408313 containerd[1696]: 2024-12-13 01:30:17.371 [INFO][5997] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf" iface="eth0" netns="" Dec 13 01:30:17.408313 containerd[1696]: 2024-12-13 01:30:17.371 [INFO][5997] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf" Dec 13 01:30:17.408313 containerd[1696]: 2024-12-13 01:30:17.371 [INFO][5997] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf" Dec 13 01:30:17.408313 containerd[1696]: 2024-12-13 01:30:17.389 [INFO][6003] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf" HandleID="k8s-pod-network.c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--plblc-eth0" Dec 13 01:30:17.408313 containerd[1696]: 2024-12-13 01:30:17.390 [INFO][6003] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:30:17.408313 containerd[1696]: 2024-12-13 01:30:17.390 [INFO][6003] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:30:17.408313 containerd[1696]: 2024-12-13 01:30:17.400 [WARNING][6003] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf" HandleID="k8s-pod-network.c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--plblc-eth0" Dec 13 01:30:17.408313 containerd[1696]: 2024-12-13 01:30:17.400 [INFO][6003] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf" HandleID="k8s-pod-network.c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf" Workload="ci--4081.2.1--a--c1e94b9ee1-k8s-calico--apiserver--6cd5658d56--plblc-eth0" Dec 13 01:30:17.408313 containerd[1696]: 2024-12-13 01:30:17.405 [INFO][6003] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:30:17.408313 containerd[1696]: 2024-12-13 01:30:17.406 [INFO][5997] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf" Dec 13 01:30:17.408313 containerd[1696]: time="2024-12-13T01:30:17.408229053Z" level=info msg="TearDown network for sandbox \"c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf\" successfully" Dec 13 01:30:17.418625 containerd[1696]: time="2024-12-13T01:30:17.418587612Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:30:17.418713 containerd[1696]: time="2024-12-13T01:30:17.418643532Z" level=info msg="RemovePodSandbox \"c9bc086e81f7daa889043b21cda0212ea3a273c76e7cf75173e6192f876227cf\" returns successfully" Dec 13 01:30:17.553085 update_engine[1668]: I20241213 01:30:17.553026 1668 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Dec 13 01:30:17.553085 update_engine[1668]: I20241213 01:30:17.553076 1668 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Dec 13 01:30:17.553486 update_engine[1668]: I20241213 01:30:17.553267 1668 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Dec 13 01:30:17.555598 update_engine[1668]: I20241213 01:30:17.555229 1668 omaha_request_params.cc:62] Current group set to stable Dec 13 01:30:17.555598 update_engine[1668]: I20241213 01:30:17.555325 1668 update_attempter.cc:499] Already updated boot flags. Skipping. Dec 13 01:30:17.555598 update_engine[1668]: I20241213 01:30:17.555335 1668 update_attempter.cc:643] Scheduling an action processor start. 
Dec 13 01:30:17.555598 update_engine[1668]: I20241213 01:30:17.555349 1668 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 01:30:17.555598 update_engine[1668]: I20241213 01:30:17.555374 1668 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Dec 13 01:30:17.555598 update_engine[1668]: I20241213 01:30:17.555428 1668 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 13 01:30:17.555598 update_engine[1668]: I20241213 01:30:17.555434 1668 omaha_request_action.cc:272] Request: Dec 13 01:30:17.555598 update_engine[1668]: Dec 13 01:30:17.555598 update_engine[1668]: Dec 13 01:30:17.555598 update_engine[1668]: Dec 13 01:30:17.555598 update_engine[1668]: Dec 13 01:30:17.555598 update_engine[1668]: Dec 13 01:30:17.555598 update_engine[1668]: Dec 13 01:30:17.555598 update_engine[1668]: Dec 13 01:30:17.555598 update_engine[1668]: Dec 13 01:30:17.555598 update_engine[1668]: I20241213 01:30:17.555440 1668 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 01:30:17.555982 locksmithd[1731]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Dec 13 01:30:17.556501 update_engine[1668]: I20241213 01:30:17.556459 1668 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 01:30:17.557477 update_engine[1668]: I20241213 01:30:17.557432 1668 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 01:30:17.575549 update_engine[1668]: E20241213 01:30:17.575510 1668 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 01:30:17.575619 update_engine[1668]: I20241213 01:30:17.575584 1668 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Dec 13 01:30:17.844781 kubelet[3180]: I1213 01:30:17.844572 3180 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:30:18.618600 systemd[1]: run-containerd-runc-k8s.io-0cef6aedfc416870b02ba9d93ba1dd3fcedb4142526bbd2c5f339a6105f9dbdc-runc.xCC1yI.mount: Deactivated successfully. Dec 13 01:30:27.560711 update_engine[1668]: I20241213 01:30:27.560493 1668 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 01:30:27.561016 update_engine[1668]: I20241213 01:30:27.560751 1668 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 01:30:27.561016 update_engine[1668]: I20241213 01:30:27.560976 1668 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 01:30:27.637729 update_engine[1668]: E20241213 01:30:27.637645 1668 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 01:30:27.637865 update_engine[1668]: I20241213 01:30:27.637760 1668 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Dec 13 01:30:31.609423 systemd[1]: run-containerd-runc-k8s.io-c3cf5ce76f5f317d7bf08ea217ab07b484229501ab1c45ec68190cec66bca082-runc.ELlHcI.mount: Deactivated successfully. Dec 13 01:30:37.559339 update_engine[1668]: I20241213 01:30:37.559274 1668 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 01:30:37.559795 update_engine[1668]: I20241213 01:30:37.559486 1668 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 01:30:37.559795 update_engine[1668]: I20241213 01:30:37.559697 1668 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 13 01:30:37.648733 update_engine[1668]: E20241213 01:30:37.648682 1668 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 01:30:37.648864 update_engine[1668]: I20241213 01:30:37.648762 1668 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Dec 13 01:30:47.560803 update_engine[1668]: I20241213 01:30:47.560734 1668 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 01:30:47.561140 update_engine[1668]: I20241213 01:30:47.560950 1668 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 01:30:47.561168 update_engine[1668]: I20241213 01:30:47.561147 1668 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Dec 13 01:30:47.672650 update_engine[1668]: E20241213 01:30:47.671103 1668 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 01:30:47.672796 update_engine[1668]: I20241213 01:30:47.672698 1668 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 13 01:30:47.672796 update_engine[1668]: I20241213 01:30:47.672720 1668 omaha_request_action.cc:617] Omaha request response: Dec 13 01:30:47.672840 update_engine[1668]: E20241213 01:30:47.672794 1668 omaha_request_action.cc:636] Omaha request network transfer failed. Dec 13 01:30:47.672840 update_engine[1668]: I20241213 01:30:47.672811 1668 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Dec 13 01:30:47.672840 update_engine[1668]: I20241213 01:30:47.672816 1668 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 01:30:47.672840 update_engine[1668]: I20241213 01:30:47.672820 1668 update_attempter.cc:306] Processing Done. Dec 13 01:30:47.672840 update_engine[1668]: E20241213 01:30:47.672834 1668 update_attempter.cc:619] Update failed. Dec 13 01:30:47.673077 update_engine[1668]: I20241213 01:30:47.672840 1668 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Dec 13 01:30:47.673077 update_engine[1668]: I20241213 01:30:47.672845 1668 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Dec 13 01:30:47.673077 update_engine[1668]: I20241213 01:30:47.672850 1668 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Dec 13 01:30:47.673077 update_engine[1668]: I20241213 01:30:47.672919 1668 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Dec 13 01:30:47.673077 update_engine[1668]: I20241213 01:30:47.672938 1668 omaha_request_action.cc:271] Posting an Omaha request to disabled Dec 13 01:30:47.673077 update_engine[1668]: I20241213 01:30:47.672943 1668 omaha_request_action.cc:272] Request: Dec 13 01:30:47.673077 update_engine[1668]: Dec 13 01:30:47.673077 update_engine[1668]: Dec 13 01:30:47.673077 update_engine[1668]: Dec 13 01:30:47.673077 update_engine[1668]: Dec 13 01:30:47.673077 update_engine[1668]: Dec 13 01:30:47.673077 update_engine[1668]: Dec 13 01:30:47.673077 update_engine[1668]: I20241213 01:30:47.672948 1668 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Dec 13 01:30:47.673309 update_engine[1668]: I20241213 01:30:47.673082 1668 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Dec 13 01:30:47.673309 update_engine[1668]: I20241213 01:30:47.673278 1668 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Dec 13 01:30:47.673510 locksmithd[1731]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Dec 13 01:30:47.714783 update_engine[1668]: E20241213 01:30:47.714723 1668 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Dec 13 01:30:47.714921 update_engine[1668]: I20241213 01:30:47.714808 1668 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Dec 13 01:30:47.714921 update_engine[1668]: I20241213 01:30:47.714818 1668 omaha_request_action.cc:617] Omaha request response: Dec 13 01:30:47.714921 update_engine[1668]: I20241213 01:30:47.714824 1668 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 01:30:47.714921 update_engine[1668]: I20241213 01:30:47.714828 1668 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Dec 13 01:30:47.714921 update_engine[1668]: I20241213 01:30:47.714833 1668 update_attempter.cc:306] Processing Done. Dec 13 01:30:47.714921 update_engine[1668]: I20241213 01:30:47.714839 1668 update_attempter.cc:310] Error event sent. Dec 13 01:30:47.714921 update_engine[1668]: I20241213 01:30:47.714849 1668 update_check_scheduler.cc:74] Next update check in 42m10s Dec 13 01:30:47.715232 locksmithd[1731]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Dec 13 01:30:48.615002 systemd[1]: run-containerd-runc-k8s.io-0cef6aedfc416870b02ba9d93ba1dd3fcedb4142526bbd2c5f339a6105f9dbdc-runc.htcaDC.mount: Deactivated successfully. Dec 13 01:31:10.546982 systemd[1]: Started sshd@7-10.200.20.11:22-10.200.16.10:44562.service - OpenSSH per-connection server daemon (10.200.16.10:44562). Dec 13 01:31:10.958906 sshd[6146]: Accepted publickey for core from 10.200.16.10 port 44562 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:31:10.960883 sshd[6146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:10.965790 systemd-logind[1661]: New session 10 of user core. Dec 13 01:31:10.970858 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 01:31:11.361309 sshd[6146]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:11.364964 systemd[1]: sshd@7-10.200.20.11:22-10.200.16.10:44562.service: Deactivated successfully. Dec 13 01:31:11.367488 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 01:31:11.368391 systemd-logind[1661]: Session 10 logged out. Waiting for processes to exit. Dec 13 01:31:11.370733 systemd-logind[1661]: Removed session 10. Dec 13 01:31:16.438844 systemd[1]: Started sshd@8-10.200.20.11:22-10.200.16.10:44574.service - OpenSSH per-connection server daemon (10.200.16.10:44574). Dec 13 01:31:16.869855 sshd[6160]: Accepted publickey for core from 10.200.16.10 port 44574 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:31:16.871382 sshd[6160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:16.876316 systemd-logind[1661]: New session 11 of user core. Dec 13 01:31:16.883880 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 01:31:17.252913 sshd[6160]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:17.256625 systemd[1]: sshd@8-10.200.20.11:22-10.200.16.10:44574.service: Deactivated successfully. Dec 13 01:31:17.259369 systemd[1]: session-11.scope: Deactivated successfully. 
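
The update_engine loop above is expected given the configuration rather than a transient network fault: the Omaha request is being posted to the literal host string "disabled", so libcurl's "Could not resolve host: disabled" failures, the conversion to kActionCodeOmahaErrorInHTTPResponse (error code 37), and locksmithd cycling CHECKING_FOR_UPDATE, REPORTING_ERROR_EVENT, IDLE all follow from that, with the next check rescheduled for 42m10s later. On Flatcar this pattern usually means update checks were deliberately pointed at a non-resolvable placeholder; a minimal, assumed illustration (the file path and keys are typical Flatcar settings, not read from this host) is:

    # /etc/flatcar/update.conf   (illustrative sketch, assuming updates were disabled this way)
    GROUP=stable
    SERVER=disabled    # non-URL placeholder; update_engine's Omaha POST then fails DNS exactly as logged above

If updates were meant to be active, SERVER would instead carry a reachable update endpoint URL and these hourly retry/error cycles would not appear in the journal.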
Dec 13 01:31:17.261291 systemd-logind[1661]: Session 11 logged out. Waiting for processes to exit. Dec 13 01:31:17.263463 systemd-logind[1661]: Removed session 11. Dec 13 01:31:22.344990 systemd[1]: Started sshd@9-10.200.20.11:22-10.200.16.10:55516.service - OpenSSH per-connection server daemon (10.200.16.10:55516). Dec 13 01:31:22.759120 sshd[6214]: Accepted publickey for core from 10.200.16.10 port 55516 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:31:22.761017 sshd[6214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:22.765365 systemd-logind[1661]: New session 12 of user core. Dec 13 01:31:22.774872 systemd[1]: Started session-12.scope - Session 12 of User core. Dec 13 01:31:23.141067 sshd[6214]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:23.145342 systemd[1]: sshd@9-10.200.20.11:22-10.200.16.10:55516.service: Deactivated successfully. Dec 13 01:31:23.147859 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 01:31:23.148717 systemd-logind[1661]: Session 12 logged out. Waiting for processes to exit. Dec 13 01:31:23.149582 systemd-logind[1661]: Removed session 12. Dec 13 01:31:23.219983 systemd[1]: Started sshd@10-10.200.20.11:22-10.200.16.10:55532.service - OpenSSH per-connection server daemon (10.200.16.10:55532). Dec 13 01:31:23.632148 sshd[6229]: Accepted publickey for core from 10.200.16.10 port 55532 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:31:23.635203 sshd[6229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:23.643004 systemd-logind[1661]: New session 13 of user core. Dec 13 01:31:23.651917 systemd[1]: Started session-13.scope - Session 13 of User core. Dec 13 01:31:24.038133 sshd[6229]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:24.042940 systemd[1]: sshd@10-10.200.20.11:22-10.200.16.10:55532.service: Deactivated successfully. Dec 13 01:31:24.045480 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 01:31:24.046200 systemd-logind[1661]: Session 13 logged out. Waiting for processes to exit. Dec 13 01:31:24.047541 systemd-logind[1661]: Removed session 13. Dec 13 01:31:24.120947 systemd[1]: Started sshd@11-10.200.20.11:22-10.200.16.10:55542.service - OpenSSH per-connection server daemon (10.200.16.10:55542). Dec 13 01:31:24.548151 sshd[6239]: Accepted publickey for core from 10.200.16.10 port 55542 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:31:24.549812 sshd[6239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:24.554626 systemd-logind[1661]: New session 14 of user core. Dec 13 01:31:24.560859 systemd[1]: Started session-14.scope - Session 14 of User core. Dec 13 01:31:24.928027 sshd[6239]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:24.932306 systemd-logind[1661]: Session 14 logged out. Waiting for processes to exit. Dec 13 01:31:24.933362 systemd[1]: sshd@11-10.200.20.11:22-10.200.16.10:55542.service: Deactivated successfully. Dec 13 01:31:24.936628 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 01:31:24.939397 systemd-logind[1661]: Removed session 14. Dec 13 01:31:30.010940 systemd[1]: Started sshd@12-10.200.20.11:22-10.200.16.10:58154.service - OpenSSH per-connection server daemon (10.200.16.10:58154). 
Dec 13 01:31:30.417910 sshd[6264]: Accepted publickey for core from 10.200.16.10 port 58154 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:31:30.419588 sshd[6264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:30.424725 systemd-logind[1661]: New session 15 of user core. Dec 13 01:31:30.428959 systemd[1]: Started session-15.scope - Session 15 of User core. Dec 13 01:31:30.808821 sshd[6264]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:30.812079 systemd[1]: sshd@12-10.200.20.11:22-10.200.16.10:58154.service: Deactivated successfully. Dec 13 01:31:30.816274 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 01:31:30.818595 systemd-logind[1661]: Session 15 logged out. Waiting for processes to exit. Dec 13 01:31:30.820672 systemd-logind[1661]: Removed session 15. Dec 13 01:31:35.891023 systemd[1]: Started sshd@13-10.200.20.11:22-10.200.16.10:58162.service - OpenSSH per-connection server daemon (10.200.16.10:58162). Dec 13 01:31:36.313552 sshd[6300]: Accepted publickey for core from 10.200.16.10 port 58162 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:31:36.314815 sshd[6300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:36.320343 systemd-logind[1661]: New session 16 of user core. Dec 13 01:31:36.325909 systemd[1]: Started session-16.scope - Session 16 of User core. Dec 13 01:31:36.688932 sshd[6300]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:36.693133 systemd[1]: sshd@13-10.200.20.11:22-10.200.16.10:58162.service: Deactivated successfully. Dec 13 01:31:36.696143 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 01:31:36.698028 systemd-logind[1661]: Session 16 logged out. Waiting for processes to exit. Dec 13 01:31:36.699276 systemd-logind[1661]: Removed session 16. Dec 13 01:31:41.789255 systemd[1]: Started sshd@14-10.200.20.11:22-10.200.16.10:36160.service - OpenSSH per-connection server daemon (10.200.16.10:36160). Dec 13 01:31:42.233280 sshd[6313]: Accepted publickey for core from 10.200.16.10 port 36160 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:31:42.235263 sshd[6313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:42.242897 systemd-logind[1661]: New session 17 of user core. Dec 13 01:31:42.249185 systemd[1]: Started session-17.scope - Session 17 of User core. Dec 13 01:31:42.648975 sshd[6313]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:42.653741 systemd[1]: sshd@14-10.200.20.11:22-10.200.16.10:36160.service: Deactivated successfully. Dec 13 01:31:42.655893 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 01:31:42.658415 systemd-logind[1661]: Session 17 logged out. Waiting for processes to exit. Dec 13 01:31:42.659897 systemd-logind[1661]: Removed session 17. Dec 13 01:31:47.730197 systemd[1]: Started sshd@15-10.200.20.11:22-10.200.16.10:36176.service - OpenSSH per-connection server daemon (10.200.16.10:36176). Dec 13 01:31:48.152453 sshd[6343]: Accepted publickey for core from 10.200.16.10 port 36176 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:31:48.154206 sshd[6343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:48.159612 systemd-logind[1661]: New session 18 of user core. Dec 13 01:31:48.164865 systemd[1]: Started session-18.scope - Session 18 of User core. 
Dec 13 01:31:48.520253 sshd[6343]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:48.523457 systemd-logind[1661]: Session 18 logged out. Waiting for processes to exit. Dec 13 01:31:48.524166 systemd[1]: sshd@15-10.200.20.11:22-10.200.16.10:36176.service: Deactivated successfully. Dec 13 01:31:48.526302 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 01:31:48.528109 systemd-logind[1661]: Removed session 18. Dec 13 01:31:48.599096 systemd[1]: Started sshd@16-10.200.20.11:22-10.200.16.10:40582.service - OpenSSH per-connection server daemon (10.200.16.10:40582). Dec 13 01:31:48.619749 systemd[1]: run-containerd-runc-k8s.io-0cef6aedfc416870b02ba9d93ba1dd3fcedb4142526bbd2c5f339a6105f9dbdc-runc.JNebSM.mount: Deactivated successfully. Dec 13 01:31:49.023682 sshd[6356]: Accepted publickey for core from 10.200.16.10 port 40582 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:31:49.025868 sshd[6356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:49.031920 systemd-logind[1661]: New session 19 of user core. Dec 13 01:31:49.040917 systemd[1]: Started session-19.scope - Session 19 of User core. Dec 13 01:31:49.501122 sshd[6356]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:49.505526 systemd[1]: sshd@16-10.200.20.11:22-10.200.16.10:40582.service: Deactivated successfully. Dec 13 01:31:49.509005 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 01:31:49.510180 systemd-logind[1661]: Session 19 logged out. Waiting for processes to exit. Dec 13 01:31:49.511285 systemd-logind[1661]: Removed session 19. Dec 13 01:31:49.583956 systemd[1]: Started sshd@17-10.200.20.11:22-10.200.16.10:40584.service - OpenSSH per-connection server daemon (10.200.16.10:40584). Dec 13 01:31:50.021547 sshd[6387]: Accepted publickey for core from 10.200.16.10 port 40584 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:31:50.023574 sshd[6387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:50.028033 systemd-logind[1661]: New session 20 of user core. Dec 13 01:31:50.034101 systemd[1]: Started session-20.scope - Session 20 of User core. Dec 13 01:31:52.038090 sshd[6387]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:52.042521 systemd[1]: sshd@17-10.200.20.11:22-10.200.16.10:40584.service: Deactivated successfully. Dec 13 01:31:52.045497 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 01:31:52.047180 systemd-logind[1661]: Session 20 logged out. Waiting for processes to exit. Dec 13 01:31:52.048571 systemd-logind[1661]: Removed session 20. Dec 13 01:31:52.116437 systemd[1]: Started sshd@18-10.200.20.11:22-10.200.16.10:40586.service - OpenSSH per-connection server daemon (10.200.16.10:40586). Dec 13 01:31:52.550731 sshd[6406]: Accepted publickey for core from 10.200.16.10 port 40586 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:31:52.552407 sshd[6406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:52.556277 systemd-logind[1661]: New session 21 of user core. Dec 13 01:31:52.562839 systemd[1]: Started session-21.scope - Session 21 of User core. Dec 13 01:31:53.051833 sshd[6406]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:53.055967 systemd[1]: sshd@18-10.200.20.11:22-10.200.16.10:40586.service: Deactivated successfully. Dec 13 01:31:53.058077 systemd[1]: session-21.scope: Deactivated successfully. 
Dec 13 01:31:53.058896 systemd-logind[1661]: Session 21 logged out. Waiting for processes to exit. Dec 13 01:31:53.059923 systemd-logind[1661]: Removed session 21. Dec 13 01:31:53.130969 systemd[1]: Started sshd@19-10.200.20.11:22-10.200.16.10:40600.service - OpenSSH per-connection server daemon (10.200.16.10:40600). Dec 13 01:31:53.539788 sshd[6416]: Accepted publickey for core from 10.200.16.10 port 40600 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:31:53.541471 sshd[6416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:53.547155 systemd-logind[1661]: New session 22 of user core. Dec 13 01:31:53.551940 systemd[1]: Started session-22.scope - Session 22 of User core. Dec 13 01:31:53.928014 sshd[6416]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:53.932237 systemd[1]: sshd@19-10.200.20.11:22-10.200.16.10:40600.service: Deactivated successfully. Dec 13 01:31:53.934518 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 01:31:53.935507 systemd-logind[1661]: Session 22 logged out. Waiting for processes to exit. Dec 13 01:31:53.936699 systemd-logind[1661]: Removed session 22. Dec 13 01:31:59.004994 systemd[1]: Started sshd@20-10.200.20.11:22-10.200.16.10:34604.service - OpenSSH per-connection server daemon (10.200.16.10:34604). Dec 13 01:31:59.410551 sshd[6432]: Accepted publickey for core from 10.200.16.10 port 34604 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:31:59.412007 sshd[6432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:31:59.416335 systemd-logind[1661]: New session 23 of user core. Dec 13 01:31:59.424837 systemd[1]: Started session-23.scope - Session 23 of User core. Dec 13 01:31:59.793941 sshd[6432]: pam_unix(sshd:session): session closed for user core Dec 13 01:31:59.797705 systemd[1]: sshd@20-10.200.20.11:22-10.200.16.10:34604.service: Deactivated successfully. Dec 13 01:31:59.800289 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 01:31:59.801873 systemd-logind[1661]: Session 23 logged out. Waiting for processes to exit. Dec 13 01:31:59.803104 systemd-logind[1661]: Removed session 23. Dec 13 01:32:04.870396 systemd[1]: Started sshd@21-10.200.20.11:22-10.200.16.10:34614.service - OpenSSH per-connection server daemon (10.200.16.10:34614). Dec 13 01:32:05.293856 sshd[6469]: Accepted publickey for core from 10.200.16.10 port 34614 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:32:05.295351 sshd[6469]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:32:05.300932 systemd-logind[1661]: New session 24 of user core. Dec 13 01:32:05.309845 systemd[1]: Started session-24.scope - Session 24 of User core. Dec 13 01:32:05.655423 sshd[6469]: pam_unix(sshd:session): session closed for user core Dec 13 01:32:05.659267 systemd[1]: sshd@21-10.200.20.11:22-10.200.16.10:34614.service: Deactivated successfully. Dec 13 01:32:05.661215 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 01:32:05.662043 systemd-logind[1661]: Session 24 logged out. Waiting for processes to exit. Dec 13 01:32:05.662963 systemd-logind[1661]: Removed session 24. Dec 13 01:32:10.734994 systemd[1]: Started sshd@22-10.200.20.11:22-10.200.16.10:51794.service - OpenSSH per-connection server daemon (10.200.16.10:51794). 
Dec 13 01:32:11.140787 sshd[6482]: Accepted publickey for core from 10.200.16.10 port 51794 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:32:11.142426 sshd[6482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:32:11.147280 systemd-logind[1661]: New session 25 of user core. Dec 13 01:32:11.153853 systemd[1]: Started session-25.scope - Session 25 of User core. Dec 13 01:32:11.499751 sshd[6482]: pam_unix(sshd:session): session closed for user core Dec 13 01:32:11.502878 systemd[1]: sshd@22-10.200.20.11:22-10.200.16.10:51794.service: Deactivated successfully. Dec 13 01:32:11.506167 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 01:32:11.507889 systemd-logind[1661]: Session 25 logged out. Waiting for processes to exit. Dec 13 01:32:11.508798 systemd-logind[1661]: Removed session 25. Dec 13 01:32:16.588100 systemd[1]: Started sshd@23-10.200.20.11:22-10.200.16.10:51798.service - OpenSSH per-connection server daemon (10.200.16.10:51798). Dec 13 01:32:17.018793 sshd[6497]: Accepted publickey for core from 10.200.16.10 port 51798 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:32:17.020250 sshd[6497]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:32:17.024644 systemd-logind[1661]: New session 26 of user core. Dec 13 01:32:17.030899 systemd[1]: Started session-26.scope - Session 26 of User core. Dec 13 01:32:17.399917 sshd[6497]: pam_unix(sshd:session): session closed for user core Dec 13 01:32:17.403576 systemd[1]: sshd@23-10.200.20.11:22-10.200.16.10:51798.service: Deactivated successfully. Dec 13 01:32:17.405465 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 01:32:17.406191 systemd-logind[1661]: Session 26 logged out. Waiting for processes to exit. Dec 13 01:32:17.407019 systemd-logind[1661]: Removed session 26. Dec 13 01:32:22.490035 systemd[1]: Started sshd@24-10.200.20.11:22-10.200.16.10:35884.service - OpenSSH per-connection server daemon (10.200.16.10:35884). Dec 13 01:32:22.919756 sshd[6549]: Accepted publickey for core from 10.200.16.10 port 35884 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:32:22.921345 sshd[6549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:32:22.925482 systemd-logind[1661]: New session 27 of user core. Dec 13 01:32:22.931885 systemd[1]: Started session-27.scope - Session 27 of User core. Dec 13 01:32:23.294435 sshd[6549]: pam_unix(sshd:session): session closed for user core Dec 13 01:32:23.297834 systemd[1]: sshd@24-10.200.20.11:22-10.200.16.10:35884.service: Deactivated successfully. Dec 13 01:32:23.300507 systemd[1]: session-27.scope: Deactivated successfully. Dec 13 01:32:23.302576 systemd-logind[1661]: Session 27 logged out. Waiting for processes to exit. Dec 13 01:32:23.304057 systemd-logind[1661]: Removed session 27. Dec 13 01:32:28.375973 systemd[1]: Started sshd@25-10.200.20.11:22-10.200.16.10:35888.service - OpenSSH per-connection server daemon (10.200.16.10:35888). Dec 13 01:32:28.802475 sshd[6561]: Accepted publickey for core from 10.200.16.10 port 35888 ssh2: RSA SHA256:bxnIRgSnix5zohfLN0WtV6Jla9y31Yo8MLFZ+P1QFxA Dec 13 01:32:28.803837 sshd[6561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:32:28.808291 systemd-logind[1661]: New session 28 of user core. Dec 13 01:32:28.813849 systemd[1]: Started session-28.scope - Session 28 of User core. 
Dec 13 01:32:29.180740 sshd[6561]: pam_unix(sshd:session): session closed for user core Dec 13 01:32:29.184810 systemd[1]: sshd@25-10.200.20.11:22-10.200.16.10:35888.service: Deactivated successfully. Dec 13 01:32:29.188202 systemd[1]: session-28.scope: Deactivated successfully. Dec 13 01:32:29.191966 systemd-logind[1661]: Session 28 logged out. Waiting for processes to exit. Dec 13 01:32:29.195409 systemd-logind[1661]: Removed session 28.
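
The SSH entries throughout this section follow systemd's per-connection (socket-activated) sshd pattern: a listening socket unit with Accept=yes spawns one short-lived service instance per connection, which is why each login appears as a unit named after its endpoints (for example sshd@7-10.200.20.11:22-10.200.16.10:44562.service, described as "OpenSSH per-connection server daemon") and is deactivated as soon as the client disconnects, while systemd-logind tracks the login itself as session-N.scope. A minimal sketch of that arrangement, using assumed unit contents and paths that were not read from this host:

    # sshd.socket   (illustrative sketch of the Accept=yes pattern seen above)
    [Unit]
    Description=OpenSSH per-connection server socket
    [Socket]
    ListenStream=22
    Accept=yes
    [Install]
    WantedBy=sockets.target

    # sshd@.service   (template; one instance per accepted connection)
    [Unit]
    Description=OpenSSH per-connection server daemon
    [Service]
    ExecStart=-/usr/sbin/sshd -i
    StandardInput=socket

With Accept=yes, systemd names each instance after the connection's local and remote address:port pairs, matching the sshd@N-...:22-...:PORT.service units logged above.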