Jan 17 12:05:59.297438 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 17 12:05:59.297460 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 17 10:42:25 -00 2025
Jan 17 12:05:59.297468 kernel: KASLR enabled
Jan 17 12:05:59.297474 kernel: earlycon: pl11 at MMIO 0x00000000effec000 (options '')
Jan 17 12:05:59.297481 kernel: printk: bootconsole [pl11] enabled
Jan 17 12:05:59.297487 kernel: efi: EFI v2.7 by EDK II
Jan 17 12:05:59.297494 kernel: efi: ACPI 2.0=0x3fd5f018 SMBIOS=0x3e580000 SMBIOS 3.0=0x3e560000 MEMATTR=0x3ead8b98 RNG=0x3fd5f998 MEMRESERVE=0x3e44ee18
Jan 17 12:05:59.297500 kernel: random: crng init done
Jan 17 12:05:59.297506 kernel: ACPI: Early table checksum verification disabled
Jan 17 12:05:59.297537 kernel: ACPI: RSDP 0x000000003FD5F018 000024 (v02 VRTUAL)
Jan 17 12:05:59.297543 kernel: ACPI: XSDT 0x000000003FD5FF18 00006C (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 12:05:59.297549 kernel: ACPI: FACP 0x000000003FD5FC18 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 12:05:59.297557 kernel: ACPI: DSDT 0x000000003FD41018 01DFCD (v02 MSFTVM DSDT01 00000001 INTL 20230628)
Jan 17 12:05:59.297563 kernel: ACPI: DBG2 0x000000003FD5FB18 000072 (v00 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 12:05:59.297570 kernel: ACPI: GTDT 0x000000003FD5FD98 000060 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 12:05:59.297576 kernel: ACPI: OEM0 0x000000003FD5F098 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 12:05:59.297583 kernel: ACPI: SPCR 0x000000003FD5FA98 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 12:05:59.297591 kernel: ACPI: APIC 0x000000003FD5F818 0000FC (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 12:05:59.297598 kernel: ACPI: SRAT 0x000000003FD5F198 000234 (v03 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 12:05:59.297604 kernel: ACPI: PPTT 0x000000003FD5F418 000120 (v01 VRTUAL MICROSFT 00000000 MSFT 00000000)
Jan 17 12:05:59.297610 kernel: ACPI: BGRT 0x000000003FD5FE98 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 12:05:59.297617 kernel: ACPI: SPCR: console: pl011,mmio32,0xeffec000,115200
Jan 17 12:05:59.297623 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff]
Jan 17 12:05:59.297630 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x1bfffffff]
Jan 17 12:05:59.297636 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1c0000000-0xfbfffffff]
Jan 17 12:05:59.297642 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff]
Jan 17 12:05:59.297649 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff]
Jan 17 12:05:59.297655 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff]
Jan 17 12:05:59.297663 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff]
Jan 17 12:05:59.297669 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff]
Jan 17 12:05:59.297676 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff]
Jan 17 12:05:59.297682 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff]
Jan 17 12:05:59.297689 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff]
Jan 17 12:05:59.297695 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff]
Jan 17 12:05:59.297701 kernel: NUMA: NODE_DATA [mem 0x1bf7ee800-0x1bf7f3fff]
Jan 17 12:05:59.297707 kernel: Zone ranges:
Jan 17 12:05:59.297714 kernel: DMA [mem 0x0000000000000000-0x00000000ffffffff]
Jan 17 12:05:59.297720 kernel: DMA32 empty
Jan 17 12:05:59.297727 kernel: Normal [mem 0x0000000100000000-0x00000001bfffffff]
Jan 17 12:05:59.297733 kernel: Movable zone start for each node
Jan 17 12:05:59.297744 kernel: Early memory node ranges
Jan 17 12:05:59.297751 kernel: node 0: [mem 0x0000000000000000-0x00000000007fffff]
Jan 17 12:05:59.297757 kernel: node 0: [mem 0x0000000000824000-0x000000003e54ffff]
Jan 17 12:05:59.297764 kernel: node 0: [mem 0x000000003e550000-0x000000003e87ffff]
Jan 17 12:05:59.297771 kernel: node 0: [mem 0x000000003e880000-0x000000003fc7ffff]
Jan 17 12:05:59.297779 kernel: node 0: [mem 0x000000003fc80000-0x000000003fcfffff]
Jan 17 12:05:59.297785 kernel: node 0: [mem 0x000000003fd00000-0x000000003fffffff]
Jan 17 12:05:59.297792 kernel: node 0: [mem 0x0000000100000000-0x00000001bfffffff]
Jan 17 12:05:59.297799 kernel: Initmem setup node 0 [mem 0x0000000000000000-0x00000001bfffffff]
Jan 17 12:05:59.297806 kernel: On node 0, zone DMA: 36 pages in unavailable ranges
Jan 17 12:05:59.297812 kernel: psci: probing for conduit method from ACPI.
Jan 17 12:05:59.297819 kernel: psci: PSCIv1.1 detected in firmware.
Jan 17 12:05:59.297826 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 17 12:05:59.297832 kernel: psci: MIGRATE_INFO_TYPE not supported.
Jan 17 12:05:59.297839 kernel: psci: SMC Calling Convention v1.4
Jan 17 12:05:59.297846 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jan 17 12:05:59.297852 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jan 17 12:05:59.297861 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jan 17 12:05:59.297867 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jan 17 12:05:59.297874 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 17 12:05:59.297881 kernel: Detected PIPT I-cache on CPU0
Jan 17 12:05:59.297888 kernel: CPU features: detected: GIC system register CPU interface
Jan 17 12:05:59.297894 kernel: CPU features: detected: Hardware dirty bit management
Jan 17 12:05:59.297901 kernel: CPU features: detected: Spectre-BHB
Jan 17 12:05:59.297908 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 17 12:05:59.297914 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 17 12:05:59.297921 kernel: CPU features: detected: ARM erratum 1418040
Jan 17 12:05:59.297928 kernel: CPU features: detected: ARM erratum 1542419 (kernel portion)
Jan 17 12:05:59.297936 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 17 12:05:59.297943 kernel: alternatives: applying boot alternatives
Jan 17 12:05:59.297951 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=1dec90e7382e4708d8bb0385f9465c79a53a2c2baf70ef34aed11855f47d17b3
Jan 17 12:05:59.297958 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 17 12:05:59.297965 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 17 12:05:59.297972 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 12:05:59.297978 kernel: Fallback order for Node 0: 0
Jan 17 12:05:59.297985 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1032156
Jan 17 12:05:59.297992 kernel: Policy zone: Normal
Jan 17 12:05:59.297999 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 12:05:59.298005 kernel: software IO TLB: area num 2.
Jan 17 12:05:59.298014 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB)
Jan 17 12:05:59.298021 kernel: Memory: 3982752K/4194160K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 211408K reserved, 0K cma-reserved)
Jan 17 12:05:59.298028 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 17 12:05:59.298034 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 12:05:59.298042 kernel: rcu: RCU event tracing is enabled.
Jan 17 12:05:59.298048 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 17 12:05:59.298055 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 12:05:59.298062 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 12:05:59.298069 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 12:05:59.298076 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 17 12:05:59.298082 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 17 12:05:59.298091 kernel: GICv3: 960 SPIs implemented
Jan 17 12:05:59.298097 kernel: GICv3: 0 Extended SPIs implemented
Jan 17 12:05:59.298104 kernel: Root IRQ handler: gic_handle_irq
Jan 17 12:05:59.298111 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 17 12:05:59.298117 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000
Jan 17 12:05:59.298124 kernel: ITS: No ITS available, not enabling LPIs
Jan 17 12:05:59.298131 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 12:05:59.298138 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 17 12:05:59.298145 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 17 12:05:59.298151 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 17 12:05:59.298158 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 17 12:05:59.298166 kernel: Console: colour dummy device 80x25
Jan 17 12:05:59.298174 kernel: printk: console [tty1] enabled
Jan 17 12:05:59.298181 kernel: ACPI: Core revision 20230628
Jan 17 12:05:59.298188 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 17 12:05:59.298195 kernel: pid_max: default: 32768 minimum: 301
Jan 17 12:05:59.298202 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 12:05:59.298209 kernel: landlock: Up and running.
Jan 17 12:05:59.298215 kernel: SELinux: Initializing.
Jan 17 12:05:59.298222 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 12:05:59.298229 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 12:05:59.298238 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:05:59.298245 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:05:59.298253 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1
Jan 17 12:05:59.298259 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0
Jan 17 12:05:59.298266 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 17 12:05:59.298273 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 12:05:59.298280 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 12:05:59.298294 kernel: Remapping and enabling EFI services.
Jan 17 12:05:59.298301 kernel: smp: Bringing up secondary CPUs ...
Jan 17 12:05:59.298308 kernel: Detected PIPT I-cache on CPU1
Jan 17 12:05:59.298316 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000
Jan 17 12:05:59.298325 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 17 12:05:59.298332 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 17 12:05:59.298339 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 12:05:59.298346 kernel: SMP: Total of 2 processors activated.
Jan 17 12:05:59.298353 kernel: CPU features: detected: 32-bit EL0 Support
Jan 17 12:05:59.298363 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence
Jan 17 12:05:59.298370 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 17 12:05:59.298377 kernel: CPU features: detected: CRC32 instructions
Jan 17 12:05:59.298385 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 17 12:05:59.298392 kernel: CPU features: detected: LSE atomic instructions
Jan 17 12:05:59.298399 kernel: CPU features: detected: Privileged Access Never
Jan 17 12:05:59.298406 kernel: CPU: All CPU(s) started at EL1
Jan 17 12:05:59.298413 kernel: alternatives: applying system-wide alternatives
Jan 17 12:05:59.298420 kernel: devtmpfs: initialized
Jan 17 12:05:59.298429 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 12:05:59.298437 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 17 12:05:59.298444 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 12:05:59.298451 kernel: SMBIOS 3.1.0 present.
Jan 17 12:05:59.298458 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024
Jan 17 12:05:59.298466 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 12:05:59.298473 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 17 12:05:59.298480 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 17 12:05:59.298488 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 17 12:05:59.298496 kernel: audit: initializing netlink subsys (disabled)
Jan 17 12:05:59.298504 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1
Jan 17 12:05:59.298521 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 12:05:59.298529 kernel: cpuidle: using governor menu
Jan 17 12:05:59.298536 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 17 12:05:59.298543 kernel: ASID allocator initialised with 32768 entries
Jan 17 12:05:59.298551 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 12:05:59.298558 kernel: Serial: AMBA PL011 UART driver
Jan 17 12:05:59.298566 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 17 12:05:59.298574 kernel: Modules: 0 pages in range for non-PLT usage
Jan 17 12:05:59.298582 kernel: Modules: 509040 pages in range for PLT usage
Jan 17 12:05:59.298589 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 12:05:59.298596 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 12:05:59.298604 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 17 12:05:59.298611 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 17 12:05:59.298618 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 12:05:59.298625 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 12:05:59.298633 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 17 12:05:59.298641 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 17 12:05:59.298649 kernel: ACPI: Added _OSI(Module Device)
Jan 17 12:05:59.298656 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 12:05:59.298663 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 17 12:05:59.298670 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 12:05:59.298678 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 12:05:59.298685 kernel: ACPI: Interpreter enabled
Jan 17 12:05:59.298692 kernel: ACPI: Using GIC for interrupt routing
Jan 17 12:05:59.298699 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA
Jan 17 12:05:59.298708 kernel: printk: console [ttyAMA0] enabled
Jan 17 12:05:59.298715 kernel: printk: bootconsole [pl11] disabled
Jan 17 12:05:59.298722 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA
Jan 17 12:05:59.298730 kernel: iommu: Default domain type: Translated
Jan 17 12:05:59.298737 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 17 12:05:59.298744 kernel: efivars: Registered efivars operations
Jan 17 12:05:59.298752 kernel: vgaarb: loaded
Jan 17 12:05:59.298759 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 17 12:05:59.298766 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 12:05:59.298775 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 12:05:59.298782 kernel: pnp: PnP ACPI init
Jan 17 12:05:59.298790 kernel: pnp: PnP ACPI: found 0 devices
Jan 17 12:05:59.298797 kernel: NET: Registered PF_INET protocol family
Jan 17 12:05:59.298804 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 17 12:05:59.298812 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 17 12:05:59.298819 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 12:05:59.298826 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 12:05:59.298834 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 17 12:05:59.298843 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 17 12:05:59.298850 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 12:05:59.298858 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 12:05:59.298865 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 12:05:59.298872 kernel: PCI: CLS 0 bytes, default 64
Jan 17 12:05:59.298879 kernel: kvm [1]: HYP mode not available
Jan 17 12:05:59.298887 kernel: Initialise system trusted keyrings
Jan 17 12:05:59.298894 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 17 12:05:59.298901 kernel: Key type asymmetric registered
Jan 17 12:05:59.298910 kernel: Asymmetric key parser 'x509' registered
Jan 17 12:05:59.298917 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 17 12:05:59.298924 kernel: io scheduler mq-deadline registered
Jan 17 12:05:59.298932 kernel: io scheduler kyber registered
Jan 17 12:05:59.298939 kernel: io scheduler bfq registered
Jan 17 12:05:59.298946 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 12:05:59.298953 kernel: thunder_xcv, ver 1.0
Jan 17 12:05:59.298960 kernel: thunder_bgx, ver 1.0
Jan 17 12:05:59.298968 kernel: nicpf, ver 1.0
Jan 17 12:05:59.298975 kernel: nicvf, ver 1.0
Jan 17 12:05:59.299110 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 17 12:05:59.299183 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-17T12:05:58 UTC (1737115558)
Jan 17 12:05:59.299193 kernel: efifb: probing for efifb
Jan 17 12:05:59.299201 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 17 12:05:59.299209 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 17 12:05:59.299216 kernel: efifb: scrolling: redraw
Jan 17 12:05:59.299223 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 17 12:05:59.299233 kernel: Console: switching to colour frame buffer device 128x48
Jan 17 12:05:59.299240 kernel: fb0: EFI VGA frame buffer device
Jan 17 12:05:59.299248 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping ....
Jan 17 12:05:59.299255 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 17 12:05:59.299262 kernel: No ACPI PMU IRQ for CPU0
Jan 17 12:05:59.299269 kernel: No ACPI PMU IRQ for CPU1
Jan 17 12:05:59.299276 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available
Jan 17 12:05:59.299284 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 17 12:05:59.299291 kernel: watchdog: Hard watchdog permanently disabled
Jan 17 12:05:59.299300 kernel: NET: Registered PF_INET6 protocol family
Jan 17 12:05:59.299307 kernel: Segment Routing with IPv6
Jan 17 12:05:59.299314 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 12:05:59.299321 kernel: NET: Registered PF_PACKET protocol family
Jan 17 12:05:59.299328 kernel: Key type dns_resolver registered
Jan 17 12:05:59.299335 kernel: registered taskstats version 1
Jan 17 12:05:59.299343 kernel: Loading compiled-in X.509 certificates
Jan 17 12:05:59.299350 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e5b890cba32c3e1c766d9a9b821ee4d2154ffee7'
Jan 17 12:05:59.299357 kernel: Key type .fscrypt registered
Jan 17 12:05:59.299366 kernel: Key type fscrypt-provisioning registered
Jan 17 12:05:59.299373 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 12:05:59.299381 kernel: ima: Allocated hash algorithm: sha1
Jan 17 12:05:59.299388 kernel: ima: No architecture policies found
Jan 17 12:05:59.299395 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 17 12:05:59.299402 kernel: clk: Disabling unused clocks
Jan 17 12:05:59.299410 kernel: Freeing unused kernel memory: 39360K
Jan 17 12:05:59.299417 kernel: Run /init as init process
Jan 17 12:05:59.299424 kernel: with arguments:
Jan 17 12:05:59.299432 kernel: /init
Jan 17 12:05:59.299440 kernel: with environment:
Jan 17 12:05:59.299447 kernel: HOME=/
Jan 17 12:05:59.299454 kernel: TERM=linux
Jan 17 12:05:59.299461 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 17 12:05:59.299470 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 12:05:59.299479 systemd[1]: Detected virtualization microsoft.
Jan 17 12:05:59.299487 systemd[1]: Detected architecture arm64.
Jan 17 12:05:59.299496 systemd[1]: Running in initrd.
Jan 17 12:05:59.299504 systemd[1]: No hostname configured, using default hostname.
Jan 17 12:05:59.299522 systemd[1]: Hostname set to <localhost>.
Jan 17 12:05:59.299530 systemd[1]: Initializing machine ID from random generator.
Jan 17 12:05:59.299538 systemd[1]: Queued start job for default target initrd.target.
Jan 17 12:05:59.299546 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:05:59.299554 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:05:59.299562 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 12:05:59.299572 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 12:05:59.299580 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 12:05:59.299588 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 12:05:59.299597 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 12:05:59.299605 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 12:05:59.299613 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:05:59.299621 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 12:05:59.299631 systemd[1]: Reached target paths.target - Path Units.
Jan 17 12:05:59.299638 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 12:05:59.299646 systemd[1]: Reached target swap.target - Swaps.
Jan 17 12:05:59.299654 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 12:05:59.299662 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 12:05:59.299670 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 12:05:59.299678 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 12:05:59.299686 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 12:05:59.299695 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:05:59.299703 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:05:59.299711 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:05:59.299719 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 12:05:59.299726 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 12:05:59.299734 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 12:05:59.299742 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 12:05:59.299750 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 12:05:59.299758 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 12:05:59.299767 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 12:05:59.299790 systemd-journald[217]: Collecting audit messages is disabled.
Jan 17 12:05:59.299810 systemd-journald[217]: Journal started
Jan 17 12:05:59.299830 systemd-journald[217]: Runtime Journal (/run/log/journal/9effeb55e68142d5b3f38750d7425937) is 8.0M, max 78.5M, 70.5M free.
Jan 17 12:05:59.304595 systemd-modules-load[218]: Inserted module 'overlay'
Jan 17 12:05:59.326529 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 12:05:59.331280 systemd-modules-load[218]: Inserted module 'br_netfilter'
Jan 17 12:05:59.343361 kernel: Bridge firewalling registered
Jan 17 12:05:59.343386 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:05:59.363198 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 12:05:59.363885 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 12:05:59.370351 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:05:59.390608 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 12:05:59.396535 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:05:59.410535 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:05:59.437163 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:05:59.447732 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 12:05:59.465732 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 12:05:59.496671 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 12:05:59.512678 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:05:59.523873 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:05:59.537200 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 12:05:59.549833 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:05:59.576084 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 12:05:59.591180 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 12:05:59.612218 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 12:05:59.638930 dracut-cmdline[249]: dracut-dracut-053
Jan 17 12:05:59.638930 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=1dec90e7382e4708d8bb0385f9465c79a53a2c2baf70ef34aed11855f47d17b3
Jan 17 12:05:59.637785 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:05:59.692983 systemd-resolved[253]: Positive Trust Anchors:
Jan 17 12:05:59.692994 systemd-resolved[253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 12:05:59.693026 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 12:05:59.695195 systemd-resolved[253]: Defaulting to hostname 'linux'.
Jan 17 12:05:59.696154 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 12:05:59.702905 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 12:05:59.796545 kernel: SCSI subsystem initialized
Jan 17 12:05:59.805524 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 12:05:59.814554 kernel: iscsi: registered transport (tcp)
Jan 17 12:05:59.834530 kernel: iscsi: registered transport (qla4xxx)
Jan 17 12:05:59.834601 kernel: QLogic iSCSI HBA Driver
Jan 17 12:05:59.877342 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 12:05:59.895804 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 12:05:59.928235 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 12:05:59.928295 kernel: device-mapper: uevent: version 1.0.3
Jan 17 12:05:59.934464 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 12:05:59.983537 kernel: raid6: neonx8 gen() 15758 MB/s
Jan 17 12:06:00.003523 kernel: raid6: neonx4 gen() 15660 MB/s
Jan 17 12:06:00.023519 kernel: raid6: neonx2 gen() 13231 MB/s
Jan 17 12:06:00.044520 kernel: raid6: neonx1 gen() 10488 MB/s
Jan 17 12:06:00.064518 kernel: raid6: int64x8 gen() 6968 MB/s
Jan 17 12:06:00.084519 kernel: raid6: int64x4 gen() 7346 MB/s
Jan 17 12:06:00.105518 kernel: raid6: int64x2 gen() 6133 MB/s
Jan 17 12:06:00.129159 kernel: raid6: int64x1 gen() 5062 MB/s
Jan 17 12:06:00.129172 kernel: raid6: using algorithm neonx8 gen() 15758 MB/s
Jan 17 12:06:00.152793 kernel: raid6: .... xor() 11928 MB/s, rmw enabled
Jan 17 12:06:00.152805 kernel: raid6: using neon recovery algorithm
Jan 17 12:06:00.165649 kernel: xor: measuring software checksum speed
Jan 17 12:06:00.165664 kernel: 8regs : 19807 MB/sec
Jan 17 12:06:00.169231 kernel: 32regs : 19613 MB/sec
Jan 17 12:06:00.176732 kernel: arm64_neon : 25464 MB/sec
Jan 17 12:06:00.176743 kernel: xor: using function: arm64_neon (25464 MB/sec)
Jan 17 12:06:00.227531 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 12:06:00.237831 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 12:06:00.254692 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:06:00.278710 systemd-udevd[436]: Using default interface naming scheme 'v255'.
Jan 17 12:06:00.284274 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:06:00.309693 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 12:06:00.326470 dracut-pre-trigger[447]: rd.md=0: removing MD RAID activation
Jan 17 12:06:00.353782 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 12:06:00.368772 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 12:06:00.411108 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:06:00.436786 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 12:06:00.467094 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 12:06:00.477208 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 12:06:00.500862 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:06:00.516150 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 12:06:00.544542 kernel: hv_vmbus: Vmbus version:5.3
Jan 17 12:06:00.544169 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 12:06:00.564237 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 12:06:00.629381 kernel: hv_vmbus: registering driver hid_hyperv
Jan 17 12:06:00.629408 kernel: hv_vmbus: registering driver hv_netvsc
Jan 17 12:06:00.629418 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 17 12:06:00.629427 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Jan 17 12:06:00.629437 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 17 12:06:00.629446 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 17 12:06:00.629620 kernel: hv_vmbus: registering driver hv_storvsc
Jan 17 12:06:00.629632 kernel: scsi host0: storvsc_host_t
Jan 17 12:06:00.629657 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 17 12:06:00.629667 kernel: scsi host1: storvsc_host_t
Jan 17 12:06:00.584387 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 12:06:00.664346 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 17 12:06:00.664425 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Jan 17 12:06:00.584686 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:06:00.682617 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jan 17 12:06:00.671794 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:06:00.684902 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:06:00.712259 kernel: hv_netvsc 002248bb-d413-0022-48bb-d413002248bb eth0: VF slot 1 added
Jan 17 12:06:00.685213 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:06:00.704955 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:06:00.731943 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:06:00.759554 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:06:00.794497 kernel: hv_vmbus: registering driver hv_pci
Jan 17 12:06:00.794531 kernel: hv_pci eb75ca8c-f507-48dd-a32c-12608b94a950: PCI VMBus probing: Using version 0x10004
Jan 17 12:06:00.596500 kernel: PTP clock support registered
Jan 17 12:06:00.603646 kernel: hv_utils: Registering HyperV Utility Driver
Jan 17 12:06:00.603663 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 17 12:06:00.603792 kernel: hv_pci eb75ca8c-f507-48dd-a32c-12608b94a950: PCI host bridge to bus f507:00
Jan 17 12:06:00.603882 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 17 12:06:00.603891 kernel: hv_vmbus: registering driver hv_utils
Jan 17 12:06:00.603900 kernel: pci_bus f507:00: root bus resource [mem 0xfc0000000-0xfc00fffff window]
Jan 17 12:06:00.603994 kernel: hv_utils: Heartbeat IC version 3.0
Jan 17 12:06:00.604006 kernel: hv_utils: Shutdown IC version 3.2
Jan 17 12:06:00.604013 kernel: hv_utils: TimeSync IC version 4.0
Jan 17 12:06:00.604021 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 17 12:06:00.605452 kernel: pci_bus f507:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 17 12:06:00.605585 kernel: pci f507:00:02.0: [15b3:1018] type 00 class 0x020000
Jan 17 12:06:00.605707 kernel: pci f507:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 17 12:06:00.605833 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 17 12:06:00.605931 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 17 12:06:00.606013 kernel: pci f507:00:02.0: enabling Extended Tags
Jan 17 12:06:00.606137 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 17 12:06:00.606235 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 17 12:06:00.606316 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 17 12:06:00.606396 kernel: pci f507:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at f507:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
Jan 17 12:06:00.606514 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 12:06:00.606524 kernel: pci_bus f507:00: busn_res: [bus 00-ff] end is updated to 00
Jan 17 12:06:00.607460 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 17 12:06:00.607565 kernel: pci f507:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref]
Jan 17 12:06:00.607668 systemd-journald[217]: Time jumped backwards, rotating.
Jan 17 12:06:00.800228 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:06:00.500122 systemd-resolved[253]: Clock change detected. Flushing caches.
Jan 17 12:06:00.514735 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:06:00.646024 kernel: mlx5_core f507:00:02.0: enabling device (0000 -> 0002)
Jan 17 12:06:00.867308 kernel: mlx5_core f507:00:02.0: firmware version: 16.30.1284
Jan 17 12:06:00.867445 kernel: hv_netvsc 002248bb-d413-0022-48bb-d413002248bb eth0: VF registering: eth1
Jan 17 12:06:00.867547 kernel: mlx5_core f507:00:02.0 eth1: joined to eth0
Jan 17 12:06:00.867647 kernel: mlx5_core f507:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic)
Jan 17 12:06:00.876137 kernel: mlx5_core f507:00:02.0 enP62727s1: renamed from eth1
Jan 17 12:06:01.165228 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 17 12:06:01.223133 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (482)
Jan 17 12:06:01.239377 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 17 12:06:01.270355 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 17 12:06:01.307127 kernel: BTRFS: device fsid 8c8354db-e4b6-4022-87e4-d06cc74d2d9f devid 1 transid 40 /dev/sda3 scanned by (udev-worker) (486)
Jan 17 12:06:01.321979 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 17 12:06:01.329400 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 17 12:06:01.360381 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 12:06:02.395933 disk-uuid[595]: The operation has completed successfully.
Jan 17 12:06:02.401405 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 12:06:02.455190 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 12:06:02.460296 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 12:06:02.494287 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 12:06:02.506892 sh[711]: Success
Jan 17 12:06:02.549470 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 17 12:06:02.748116 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 12:06:02.769275 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 12:06:02.778883 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 12:06:02.806563 kernel: BTRFS info (device dm-0): first mount of filesystem 8c8354db-e4b6-4022-87e4-d06cc74d2d9f
Jan 17 12:06:02.806599 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 17 12:06:02.814850 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 12:06:02.820528 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 12:06:02.824862 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 12:06:03.225461 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 12:06:03.231968 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 12:06:03.252399 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 12:06:03.260303 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 12:06:03.301304 kernel: BTRFS info (device sda6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5
Jan 17 12:06:03.301367 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 17 12:06:03.306494 kernel: BTRFS info (device sda6): using free space tree
Jan 17 12:06:03.328150 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 12:06:03.337651 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 12:06:03.351281 kernel: BTRFS info (device sda6): last unmount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5
Jan 17 12:06:03.358178 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 12:06:03.374344 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 12:06:03.427018 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 12:06:03.454303 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 12:06:03.480844 systemd-networkd[895]: lo: Link UP
Jan 17 12:06:03.480858 systemd-networkd[895]: lo: Gained carrier
Jan 17 12:06:03.482979 systemd-networkd[895]: Enumeration completed
Jan 17 12:06:03.484448 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 12:06:03.487506 systemd-networkd[895]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:06:03.487510 systemd-networkd[895]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 12:06:03.494640 systemd[1]: Reached target network.target - Network.
Jan 17 12:06:03.553127 kernel: mlx5_core f507:00:02.0 enP62727s1: Link up
Jan 17 12:06:03.594128 kernel: hv_netvsc 002248bb-d413-0022-48bb-d413002248bb eth0: Data path switched to VF: enP62727s1
Jan 17 12:06:03.594466 systemd-networkd[895]: enP62727s1: Link UP
Jan 17 12:06:03.594570 systemd-networkd[895]: eth0: Link UP
Jan 17 12:06:03.594665 systemd-networkd[895]: eth0: Gained carrier
Jan 17 12:06:03.594673 systemd-networkd[895]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:06:03.619990 systemd-networkd[895]: enP62727s1: Gained carrier
Jan 17 12:06:03.634518 systemd-networkd[895]: eth0: DHCPv4 address 10.200.20.40/24, gateway 10.200.20.1 acquired from 168.63.129.16
Jan 17 12:06:04.251945 ignition[830]: Ignition 2.19.0
Jan 17 12:06:04.251962 ignition[830]: Stage: fetch-offline
Jan 17 12:06:04.256782 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 12:06:04.251996 ignition[830]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:06:04.252004 ignition[830]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 12:06:04.252097 ignition[830]: parsed url from cmdline: ""
Jan 17 12:06:04.252126 ignition[830]: no config URL provided
Jan 17 12:06:04.252131 ignition[830]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 12:06:04.252139 ignition[830]: no config at "/usr/lib/ignition/user.ign"
Jan 17 12:06:04.286277 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 17 12:06:04.252144 ignition[830]: failed to fetch config: resource requires networking
Jan 17 12:06:04.252536 ignition[830]: Ignition finished successfully
Jan 17 12:06:04.308738 ignition[905]: Ignition 2.19.0
Jan 17 12:06:04.308745 ignition[905]: Stage: fetch
Jan 17 12:06:04.308942 ignition[905]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:06:04.308952 ignition[905]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 12:06:04.309052 ignition[905]: parsed url from cmdline: ""
Jan 17 12:06:04.309055 ignition[905]: no config URL provided
Jan 17 12:06:04.309059 ignition[905]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 12:06:04.309067 ignition[905]: no config at "/usr/lib/ignition/user.ign"
Jan 17 12:06:04.309087 ignition[905]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 17 12:06:04.406181 ignition[905]: GET result: OK
Jan 17 12:06:04.406272 ignition[905]: config has been read from IMDS userdata
Jan 17 12:06:04.406327 ignition[905]: parsing config with SHA512: 15c8bd8314a162f784fc0bb7a52d6888878c0ab0ab5f645f42d3277e37a7da51fcf2f84d6083d53693e889166337db80492e85f6b401acc16d7dcb67c2b054e1
Jan 17 12:06:04.410333 unknown[905]: fetched base config from "system"
Jan 17 12:06:04.410710 ignition[905]: fetch: fetch complete
Jan 17 12:06:04.410340 unknown[905]: fetched base config from "system"
Jan 17 12:06:04.410715 ignition[905]: fetch: fetch passed
Jan 17 12:06:04.410346 unknown[905]: fetched user config from "azure"
Jan 17 12:06:04.410757 ignition[905]: Ignition finished successfully
Jan 17 12:06:04.414507 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 12:06:04.441401 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 12:06:04.462475 ignition[912]: Ignition 2.19.0
Jan 17 12:06:04.467461 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 12:06:04.462481 ignition[912]: Stage: kargs
Jan 17 12:06:04.462769 ignition[912]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:06:04.462779 ignition[912]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 12:06:04.495270 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 12:06:04.465409 ignition[912]: kargs: kargs passed
Jan 17 12:06:04.465471 ignition[912]: Ignition finished successfully
Jan 17 12:06:04.513922 ignition[918]: Ignition 2.19.0
Jan 17 12:06:04.519204 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 12:06:04.513928 ignition[918]: Stage: disks
Jan 17 12:06:04.527524 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 12:06:04.514093 ignition[918]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:06:04.539445 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 12:06:04.514129 ignition[918]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 12:06:04.549202 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 12:06:04.515031 ignition[918]: disks: disks passed
Jan 17 12:06:04.560623 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 12:06:04.515076 ignition[918]: Ignition finished successfully
Jan 17 12:06:04.570788 systemd[1]: Reached target basic.target - Basic System.
Jan 17 12:06:04.597434 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 12:06:04.666379 systemd-fsck[928]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 17 12:06:04.671878 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 12:06:04.692187 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 12:06:04.750160 kernel: EXT4-fs (sda9): mounted filesystem 5d516319-3144-49e6-9760-d0f29faba535 r/w with ordered data mode. Quota mode: none.
Jan 17 12:06:04.750491 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 12:06:04.759571 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 12:06:04.799189 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 12:06:04.809392 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 12:06:04.828197 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (939)
Jan 17 12:06:04.828475 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 17 12:06:04.866413 kernel: BTRFS info (device sda6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5
Jan 17 12:06:04.866443 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm
Jan 17 12:06:04.866489 kernel: BTRFS info (device sda6): using free space tree
Jan 17 12:06:04.848017 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 12:06:04.889315 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 12:06:04.848057 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 12:06:04.860645 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 12:06:04.896411 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 12:06:04.907276 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 12:06:05.424201 systemd-networkd[895]: enP62727s1: Gained IPv6LL
Jan 17 12:06:05.429312 coreos-metadata[941]: Jan 17 12:06:05.429 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 17 12:06:05.437378 coreos-metadata[941]: Jan 17 12:06:05.432 INFO Fetch successful
Jan 17 12:06:05.437378 coreos-metadata[941]: Jan 17 12:06:05.432 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 17 12:06:05.454677 coreos-metadata[941]: Jan 17 12:06:05.449 INFO Fetch successful
Jan 17 12:06:05.464264 coreos-metadata[941]: Jan 17 12:06:05.464 INFO wrote hostname ci-4081.3.0-a-c8756aff3b to /sysroot/etc/hostname
Jan 17 12:06:05.472925 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 17 12:06:05.488246 systemd-networkd[895]: eth0: Gained IPv6LL
Jan 17 12:06:05.957376 initrd-setup-root[968]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 12:06:06.003616 initrd-setup-root[975]: cut: /sysroot/etc/group: No such file or directory
Jan 17 12:06:06.012717 initrd-setup-root[982]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 12:06:06.036653 initrd-setup-root[989]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 12:06:06.981991 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 12:06:06.997362 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 12:06:07.010539 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 12:06:07.028536 kernel: BTRFS info (device sda6): last unmount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5 Jan 17 12:06:07.024753 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 12:06:07.054081 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 12:06:07.068563 ignition[1058]: INFO : Ignition 2.19.0 Jan 17 12:06:07.068563 ignition[1058]: INFO : Stage: mount Jan 17 12:06:07.068563 ignition[1058]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:06:07.068563 ignition[1058]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 12:06:07.068563 ignition[1058]: INFO : mount: mount passed Jan 17 12:06:07.068563 ignition[1058]: INFO : Ignition finished successfully Jan 17 12:06:07.071549 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 12:06:07.089441 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 12:06:07.112361 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:06:07.156128 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1069) Jan 17 12:06:07.156184 kernel: BTRFS info (device sda6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5 Jan 17 12:06:07.166374 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 17 12:06:07.166410 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:06:07.173121 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 12:06:07.175153 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:06:07.207265 ignition[1087]: INFO : Ignition 2.19.0 Jan 17 12:06:07.211618 ignition[1087]: INFO : Stage: files Jan 17 12:06:07.211618 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:06:07.211618 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 12:06:07.211618 ignition[1087]: DEBUG : files: compiled without relabeling support, skipping Jan 17 12:06:07.234190 ignition[1087]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 12:06:07.234190 ignition[1087]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 12:06:07.316440 ignition[1087]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 12:06:07.323966 ignition[1087]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 12:06:07.323966 ignition[1087]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 12:06:07.316953 unknown[1087]: wrote ssh authorized keys file for user: core Jan 17 12:06:07.344745 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 17 12:06:07.344745 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 17 12:06:07.437726 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 17 12:06:07.733128 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 17 12:06:07.733128 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 17 12:06:07.755020 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file 
"/sysroot/home/core/install.sh" Jan 17 12:06:07.755020 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:06:07.755020 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:06:07.755020 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:06:07.755020 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:06:07.755020 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:06:07.755020 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:06:07.755020 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:06:07.755020 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:06:07.755020 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 17 12:06:07.755020 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 17 12:06:07.755020 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 17 12:06:07.755020 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Jan 17 12:06:08.194048 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 17 12:06:08.403740 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 17 12:06:08.403740 ignition[1087]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 17 12:06:08.459163 ignition[1087]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:06:08.472791 ignition[1087]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:06:08.472791 ignition[1087]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 17 12:06:08.472791 ignition[1087]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 17 12:06:08.472791 ignition[1087]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 12:06:08.472791 ignition[1087]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:06:08.472791 ignition[1087]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:06:08.472791 ignition[1087]: INFO : 
files: files passed Jan 17 12:06:08.472791 ignition[1087]: INFO : Ignition finished successfully Jan 17 12:06:08.490128 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 12:06:08.539544 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 12:06:08.561312 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 12:06:08.571075 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 12:06:08.571181 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 12:06:08.620372 initrd-setup-root-after-ignition[1119]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:06:08.613980 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:06:08.652186 initrd-setup-root-after-ignition[1115]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:06:08.652186 initrd-setup-root-after-ignition[1115]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:06:08.628473 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 12:06:08.655275 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 12:06:08.695853 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 12:06:08.701187 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 12:06:08.709480 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 12:06:08.722718 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 12:06:08.734360 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 12:06:08.750359 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 12:06:08.774178 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:06:08.790406 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 12:06:08.812686 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 12:06:08.812799 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 12:06:08.825982 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:06:08.838850 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:06:08.852575 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 12:06:08.865084 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 12:06:08.865215 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:06:08.883242 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 12:06:08.895160 systemd[1]: Stopped target basic.target - Basic System. Jan 17 12:06:08.905452 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 12:06:08.916051 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:06:08.928390 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 12:06:08.940498 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 12:06:08.951881 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
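
The file, link, and preset operations above (op(5) through op(d)) are driven by the instance's Ignition config. A minimal sketch of a config shaped to produce them, expressed as Python emitting the JSON — the paths are taken from the log, but the spec version, file contents, and unit text are assumptions, not this machine's actual config:

    import json

    # Hypothetical Ignition v3-style config yielding operations like the ones
    # logged above; real configs are usually written as Butane YAML and
    # transpiled, and the contents here are placeholders.
    config = {
        "ignition": {"version": "3.3.0"},
        "storage": {
            "files": [
                {"path": "/home/core/nginx.yaml",                  # op(5)
                 "contents": {"source": "data:,placeholder"}},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",         # op(9)
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"},
            ],
        },
        "systemd": {
            "units": [
                {"name": "prepare-helm.service", "enabled": True,  # op(b), op(d)
                 "contents": "[Unit]\nDescription=placeholder\n"},
            ],
        },
    }
    print(json.dumps(config, indent=2))
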
Jan 17 12:06:08.963892 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 12:06:08.976029 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 12:06:08.986875 systemd[1]: Stopped target swap.target - Swaps. Jan 17 12:06:08.997562 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 12:06:08.997642 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:06:09.012677 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:06:09.018847 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:06:09.031288 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 12:06:09.036645 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:06:09.043681 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 12:06:09.043752 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 12:06:09.062512 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 12:06:09.062618 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:06:09.083146 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 12:06:09.083234 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 12:06:09.096616 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 17 12:06:09.096667 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 12:06:09.184791 ignition[1140]: INFO : Ignition 2.19.0 Jan 17 12:06:09.184791 ignition[1140]: INFO : Stage: umount Jan 17 12:06:09.184791 ignition[1140]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:06:09.184791 ignition[1140]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 12:06:09.184791 ignition[1140]: INFO : umount: umount passed Jan 17 12:06:09.184791 ignition[1140]: INFO : Ignition finished successfully Jan 17 12:06:09.136413 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 12:06:09.161640 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 12:06:09.190203 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 12:06:09.190322 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:06:09.208407 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 12:06:09.208473 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:06:09.222004 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 12:06:09.222124 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 12:06:09.233419 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 12:06:09.233484 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 12:06:09.244274 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 12:06:09.244327 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 12:06:09.255391 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 12:06:09.255437 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 12:06:09.267063 systemd[1]: Stopped target network.target - Network. Jan 17 12:06:09.277659 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Jan 17 12:06:09.277733 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:06:09.290252 systemd[1]: Stopped target paths.target - Path Units. Jan 17 12:06:09.300748 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 12:06:09.306640 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:06:09.318478 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 12:06:09.331526 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 12:06:09.341843 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 12:06:09.341901 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:06:09.353456 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 12:06:09.353521 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:06:09.365296 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 12:06:09.365368 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 12:06:09.377032 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 12:06:09.377123 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 12:06:09.388001 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 12:06:09.400491 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 12:06:09.596456 kernel: hv_netvsc 002248bb-d413-0022-48bb-d413002248bb eth0: Data path switched from VF: enP62727s1 Jan 17 12:06:09.410151 systemd-networkd[895]: eth0: DHCPv6 lease lost Jan 17 12:06:09.417622 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 12:06:09.418226 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 12:06:09.418342 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 12:06:09.430459 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 12:06:09.430531 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:06:09.462338 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 12:06:09.467637 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 12:06:09.467711 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:06:09.475306 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:06:09.496763 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 12:06:09.496869 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 12:06:09.525281 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:06:09.525402 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:06:09.536833 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 12:06:09.536901 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 12:06:09.548362 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 12:06:09.548424 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:06:09.574770 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 12:06:09.574910 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:06:09.588913 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Jan 17 12:06:09.589055 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 12:06:09.602625 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 12:06:09.602675 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:06:09.612769 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 12:06:09.612837 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:06:09.628224 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 12:06:09.628293 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 12:06:09.645856 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:06:09.645940 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:06:09.697399 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 12:06:09.716243 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 12:06:09.716362 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:06:09.733963 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 17 12:06:09.734022 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:06:09.749388 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 12:06:09.749442 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:06:09.764836 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:06:09.764897 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:06:09.780127 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 12:06:09.780267 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 12:06:09.794571 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 12:06:09.794659 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 12:06:09.933893 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 12:06:09.934054 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 12:06:09.943481 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 12:06:09.954275 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 12:06:09.954353 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 12:06:09.993376 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 12:06:10.024635 systemd[1]: Switching root. 
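
Once the pivot completes, the record Ignition wrote at op(e) is visible in the real root as /etc/.ignition-result.json. A minimal post-boot check, assuming the file is plain JSON — its exact schema is not shown in this log:

    import json

    # Inspect the result file Ignition wrote above (op(e)); the fields are an
    # assumption -- the log only shows that the file was created.
    with open("/etc/.ignition-result.json") as f:
        result = json.load(f)
    print(json.dumps(result, indent=2))
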
Jan 17 12:06:10.100857 systemd-journald[217]: Journal stopped Jan 17 12:05:59.297985 kernel: Built 1 zonelists, mobility grouping on.
Total pages: 1032156 Jan 17 12:05:59.297992 kernel: Policy zone: Normal Jan 17 12:05:59.297999 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 12:05:59.298005 kernel: software IO TLB: area num 2. Jan 17 12:05:59.298014 kernel: software IO TLB: mapped [mem 0x000000003a44e000-0x000000003e44e000] (64MB) Jan 17 12:05:59.298021 kernel: Memory: 3982752K/4194160K available (10240K kernel code, 2184K rwdata, 8096K rodata, 39360K init, 897K bss, 211408K reserved, 0K cma-reserved) Jan 17 12:05:59.298028 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 17 12:05:59.298034 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 12:05:59.298042 kernel: rcu: RCU event tracing is enabled. Jan 17 12:05:59.298048 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 17 12:05:59.298055 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 12:05:59.298062 kernel: Tracing variant of Tasks RCU enabled. Jan 17 12:05:59.298069 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 12:05:59.298076 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 17 12:05:59.298082 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 17 12:05:59.298091 kernel: GICv3: 960 SPIs implemented Jan 17 12:05:59.298097 kernel: GICv3: 0 Extended SPIs implemented Jan 17 12:05:59.298104 kernel: Root IRQ handler: gic_handle_irq Jan 17 12:05:59.298111 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 17 12:05:59.298117 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000effee000 Jan 17 12:05:59.298124 kernel: ITS: No ITS available, not enabling LPIs Jan 17 12:05:59.298131 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 17 12:05:59.298138 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 17 12:05:59.298145 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 17 12:05:59.298151 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 17 12:05:59.298158 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 17 12:05:59.298166 kernel: Console: colour dummy device 80x25 Jan 17 12:05:59.298174 kernel: printk: console [tty1] enabled Jan 17 12:05:59.298181 kernel: ACPI: Core revision 20230628 Jan 17 12:05:59.298188 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 17 12:05:59.298195 kernel: pid_max: default: 32768 minimum: 301 Jan 17 12:05:59.298202 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 12:05:59.298209 kernel: landlock: Up and running. Jan 17 12:05:59.298215 kernel: SELinux: Initializing. Jan 17 12:05:59.298222 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 12:05:59.298229 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 17 12:05:59.298238 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:05:59.298245 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:05:59.298253 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3a8030, hints 0xe, misc 0x31e1 Jan 17 12:05:59.298259 kernel: Hyper-V: Host Build 10.0.22477.1594-1-0 Jan 17 12:05:59.298266 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 17 12:05:59.298273 kernel: rcu: Hierarchical SRCU implementation. 
Jan 17 12:05:59.298280 kernel: rcu: Max phase no-delay instances is 400. Jan 17 12:05:59.298294 kernel: Remapping and enabling EFI services. Jan 17 12:05:59.298301 kernel: smp: Bringing up secondary CPUs ... Jan 17 12:05:59.298308 kernel: Detected PIPT I-cache on CPU1 Jan 17 12:05:59.298316 kernel: GICv3: CPU1: found redistributor 1 region 1:0x00000000f000e000 Jan 17 12:05:59.298325 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 17 12:05:59.298332 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 17 12:05:59.298339 kernel: smp: Brought up 1 node, 2 CPUs Jan 17 12:05:59.298346 kernel: SMP: Total of 2 processors activated. Jan 17 12:05:59.298353 kernel: CPU features: detected: 32-bit EL0 Support Jan 17 12:05:59.298363 kernel: CPU features: detected: Instruction cache invalidation not required for I/D coherence Jan 17 12:05:59.298370 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 17 12:05:59.298377 kernel: CPU features: detected: CRC32 instructions Jan 17 12:05:59.298385 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 17 12:05:59.298392 kernel: CPU features: detected: LSE atomic instructions Jan 17 12:05:59.298399 kernel: CPU features: detected: Privileged Access Never Jan 17 12:05:59.298406 kernel: CPU: All CPU(s) started at EL1 Jan 17 12:05:59.298413 kernel: alternatives: applying system-wide alternatives Jan 17 12:05:59.298420 kernel: devtmpfs: initialized Jan 17 12:05:59.298429 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 12:05:59.298437 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 17 12:05:59.298444 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 12:05:59.298451 kernel: SMBIOS 3.1.0 present. Jan 17 12:05:59.298458 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 09/28/2024 Jan 17 12:05:59.298466 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 12:05:59.298473 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 17 12:05:59.298480 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 17 12:05:59.298488 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 17 12:05:59.298496 kernel: audit: initializing netlink subsys (disabled) Jan 17 12:05:59.298504 kernel: audit: type=2000 audit(0.047:1): state=initialized audit_enabled=0 res=1 Jan 17 12:05:59.298521 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 12:05:59.298529 kernel: cpuidle: using governor menu Jan 17 12:05:59.298536 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Jan 17 12:05:59.298543 kernel: ASID allocator initialised with 32768 entries Jan 17 12:05:59.298551 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 12:05:59.298558 kernel: Serial: AMBA PL011 UART driver Jan 17 12:05:59.298566 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 17 12:05:59.298574 kernel: Modules: 0 pages in range for non-PLT usage Jan 17 12:05:59.298582 kernel: Modules: 509040 pages in range for PLT usage Jan 17 12:05:59.298589 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 12:05:59.298596 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 12:05:59.298604 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 17 12:05:59.298611 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 17 12:05:59.298618 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 12:05:59.298625 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 12:05:59.298633 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 17 12:05:59.298641 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 17 12:05:59.298649 kernel: ACPI: Added _OSI(Module Device) Jan 17 12:05:59.298656 kernel: ACPI: Added _OSI(Processor Device) Jan 17 12:05:59.298663 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 17 12:05:59.298670 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 12:05:59.298678 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 17 12:05:59.298685 kernel: ACPI: Interpreter enabled Jan 17 12:05:59.298692 kernel: ACPI: Using GIC for interrupt routing Jan 17 12:05:59.298699 kernel: ARMH0011:00: ttyAMA0 at MMIO 0xeffec000 (irq = 12, base_baud = 0) is a SBSA Jan 17 12:05:59.298708 kernel: printk: console [ttyAMA0] enabled Jan 17 12:05:59.298715 kernel: printk: bootconsole [pl11] disabled Jan 17 12:05:59.298722 kernel: ARMH0011:01: ttyAMA1 at MMIO 0xeffeb000 (irq = 13, base_baud = 0) is a SBSA Jan 17 12:05:59.298730 kernel: iommu: Default domain type: Translated Jan 17 12:05:59.298737 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 17 12:05:59.298744 kernel: efivars: Registered efivars operations Jan 17 12:05:59.298752 kernel: vgaarb: loaded Jan 17 12:05:59.298759 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 17 12:05:59.298766 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 12:05:59.298775 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 12:05:59.298782 kernel: pnp: PnP ACPI init Jan 17 12:05:59.298790 kernel: pnp: PnP ACPI: found 0 devices Jan 17 12:05:59.298797 kernel: NET: Registered PF_INET protocol family Jan 17 12:05:59.298804 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 17 12:05:59.298812 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 17 12:05:59.298819 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 12:05:59.298826 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 17 12:05:59.298834 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 17 12:05:59.298843 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 17 12:05:59.298850 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 12:05:59.298858 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 17 12:05:59.298865 
kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 12:05:59.298872 kernel: PCI: CLS 0 bytes, default 64 Jan 17 12:05:59.298879 kernel: kvm [1]: HYP mode not available Jan 17 12:05:59.298887 kernel: Initialise system trusted keyrings Jan 17 12:05:59.298894 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 17 12:05:59.298901 kernel: Key type asymmetric registered Jan 17 12:05:59.298910 kernel: Asymmetric key parser 'x509' registered Jan 17 12:05:59.298917 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 17 12:05:59.298924 kernel: io scheduler mq-deadline registered Jan 17 12:05:59.298932 kernel: io scheduler kyber registered Jan 17 12:05:59.298939 kernel: io scheduler bfq registered Jan 17 12:05:59.298946 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 12:05:59.298953 kernel: thunder_xcv, ver 1.0 Jan 17 12:05:59.298960 kernel: thunder_bgx, ver 1.0 Jan 17 12:05:59.298968 kernel: nicpf, ver 1.0 Jan 17 12:05:59.298975 kernel: nicvf, ver 1.0 Jan 17 12:05:59.299110 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 17 12:05:59.299183 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-01-17T12:05:58 UTC (1737115558) Jan 17 12:05:59.299193 kernel: efifb: probing for efifb Jan 17 12:05:59.299201 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 17 12:05:59.299209 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 17 12:05:59.299216 kernel: efifb: scrolling: redraw Jan 17 12:05:59.299223 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 17 12:05:59.299233 kernel: Console: switching to colour frame buffer device 128x48 Jan 17 12:05:59.299240 kernel: fb0: EFI VGA frame buffer device Jan 17 12:05:59.299248 kernel: SMCCC: SOC_ID: ARCH_SOC_ID not implemented, skipping .... Jan 17 12:05:59.299255 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 17 12:05:59.299262 kernel: No ACPI PMU IRQ for CPU0 Jan 17 12:05:59.299269 kernel: No ACPI PMU IRQ for CPU1 Jan 17 12:05:59.299276 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 1 counters available Jan 17 12:05:59.299284 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 17 12:05:59.299291 kernel: watchdog: Hard watchdog permanently disabled Jan 17 12:05:59.299300 kernel: NET: Registered PF_INET6 protocol family Jan 17 12:05:59.299307 kernel: Segment Routing with IPv6 Jan 17 12:05:59.299314 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 12:05:59.299321 kernel: NET: Registered PF_PACKET protocol family Jan 17 12:05:59.299328 kernel: Key type dns_resolver registered Jan 17 12:05:59.299335 kernel: registered taskstats version 1 Jan 17 12:05:59.299343 kernel: Loading compiled-in X.509 certificates Jan 17 12:05:59.299350 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e5b890cba32c3e1c766d9a9b821ee4d2154ffee7' Jan 17 12:05:59.299357 kernel: Key type .fscrypt registered Jan 17 12:05:59.299366 kernel: Key type fscrypt-provisioning registered Jan 17 12:05:59.299373 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 17 12:05:59.299381 kernel: ima: Allocated hash algorithm: sha1 Jan 17 12:05:59.299388 kernel: ima: No architecture policies found Jan 17 12:05:59.299395 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 17 12:05:59.299402 kernel: clk: Disabling unused clocks Jan 17 12:05:59.299410 kernel: Freeing unused kernel memory: 39360K Jan 17 12:05:59.299417 kernel: Run /init as init process Jan 17 12:05:59.299424 kernel: with arguments: Jan 17 12:05:59.299432 kernel: /init Jan 17 12:05:59.299440 kernel: with environment: Jan 17 12:05:59.299447 kernel: HOME=/ Jan 17 12:05:59.299454 kernel: TERM=linux Jan 17 12:05:59.299461 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 17 12:05:59.299470 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:05:59.299479 systemd[1]: Detected virtualization microsoft. Jan 17 12:05:59.299487 systemd[1]: Detected architecture arm64. Jan 17 12:05:59.299496 systemd[1]: Running in initrd. Jan 17 12:05:59.299504 systemd[1]: No hostname configured, using default hostname. Jan 17 12:05:59.299522 systemd[1]: Hostname set to <localhost>. Jan 17 12:05:59.299530 systemd[1]: Initializing machine ID from random generator. Jan 17 12:05:59.299538 systemd[1]: Queued start job for default target initrd.target. Jan 17 12:05:59.299546 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:05:59.299554 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:05:59.299562 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 12:05:59.299572 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:05:59.299580 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 12:05:59.299588 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 12:05:59.299597 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 12:05:59.299605 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 12:05:59.299613 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:05:59.299621 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:05:59.299631 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:05:59.299638 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:05:59.299646 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:05:59.299654 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:05:59.299662 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:05:59.299670 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:05:59.299678 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 12:05:59.299686 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 12:05:59.299695 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:05:59.299703 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:05:59.299711 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:05:59.299719 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:05:59.299726 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 12:05:59.299734 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:05:59.299742 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 12:05:59.299750 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 12:05:59.299758 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:05:59.299767 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:05:59.299790 systemd-journald[217]: Collecting audit messages is disabled. Jan 17 12:05:59.299810 systemd-journald[217]: Journal started Jan 17 12:05:59.299830 systemd-journald[217]: Runtime Journal (/run/log/journal/9effeb55e68142d5b3f38750d7425937) is 8.0M, max 78.5M, 70.5M free. Jan 17 12:05:59.304595 systemd-modules-load[218]: Inserted module 'overlay' Jan 17 12:05:59.326529 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 12:05:59.331280 systemd-modules-load[218]: Inserted module 'br_netfilter' Jan 17 12:05:59.343361 kernel: Bridge firewalling registered Jan 17 12:05:59.343386 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:05:59.363198 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:05:59.363885 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 12:05:59.370351 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:05:59.390608 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 12:05:59.396535 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:05:59.410535 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:05:59.437163 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:05:59.447732 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:05:59.465732 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:05:59.496671 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:05:59.512678 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:05:59.523873 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:05:59.537200 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:05:59.549833 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:05:59.576084 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 12:05:59.591180 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:05:59.612218 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 17 12:05:59.638930 dracut-cmdline[249]: dracut-dracut-053 Jan 17 12:05:59.638930 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyAMA0,115200n8 earlycon=pl011,0xeffec000 flatcar.first_boot=detected acpi=force flatcar.oem.id=azure flatcar.autologin verity.usrhash=1dec90e7382e4708d8bb0385f9465c79a53a2c2baf70ef34aed11855f47d17b3 Jan 17 12:05:59.637785 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:05:59.692983 systemd-resolved[253]: Positive Trust Anchors: Jan 17 12:05:59.692994 systemd-resolved[253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:05:59.693026 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:05:59.695195 systemd-resolved[253]: Defaulting to hostname 'linux'. Jan 17 12:05:59.696154 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:05:59.702905 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:05:59.796545 kernel: SCSI subsystem initialized Jan 17 12:05:59.805524 kernel: Loading iSCSI transport class v2.0-870. Jan 17 12:05:59.814554 kernel: iscsi: registered transport (tcp) Jan 17 12:05:59.834530 kernel: iscsi: registered transport (qla4xxx) Jan 17 12:05:59.834601 kernel: QLogic iSCSI HBA Driver Jan 17 12:05:59.877342 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 12:05:59.895804 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 12:05:59.928235 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 12:05:59.928295 kernel: device-mapper: uevent: version 1.0.3 Jan 17 12:05:59.934464 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 12:05:59.983537 kernel: raid6: neonx8 gen() 15758 MB/s Jan 17 12:06:00.003523 kernel: raid6: neonx4 gen() 15660 MB/s Jan 17 12:06:00.023519 kernel: raid6: neonx2 gen() 13231 MB/s Jan 17 12:06:00.044520 kernel: raid6: neonx1 gen() 10488 MB/s Jan 17 12:06:00.064518 kernel: raid6: int64x8 gen() 6968 MB/s Jan 17 12:06:00.084519 kernel: raid6: int64x4 gen() 7346 MB/s Jan 17 12:06:00.105518 kernel: raid6: int64x2 gen() 6133 MB/s Jan 17 12:06:00.129159 kernel: raid6: int64x1 gen() 5062 MB/s Jan 17 12:06:00.129172 kernel: raid6: using algorithm neonx8 gen() 15758 MB/s Jan 17 12:06:00.152793 kernel: raid6: .... 
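
Both dracut-cmdline above and parse-ip-for-networkd later work from this kernel command line. A small sketch of that style of parsing, where parse_cmdline is a made-up helper and flag-only parameters map to None:

    # Split a kernel command line such as the one dracut echoes above into a
    # dict; duplicate keys (e.g. two console= entries) keep the last value.
    def parse_cmdline(line: str) -> dict:
        params = {}
        for token in line.split():
            key, sep, value = token.partition("=")
            params[key] = value if sep else None
        return params

    with open("/proc/cmdline") as f:
        args = parse_cmdline(f.read())
    print(args.get("flatcar.oem.id"))  # "azure" on this machine, per the log
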
xor() 11928 MB/s, rmw enabled Jan 17 12:06:00.152805 kernel: raid6: using neon recovery algorithm Jan 17 12:06:00.165649 kernel: xor: measuring software checksum speed Jan 17 12:06:00.165664 kernel: 8regs : 19807 MB/sec Jan 17 12:06:00.169231 kernel: 32regs : 19613 MB/sec Jan 17 12:06:00.176732 kernel: arm64_neon : 25464 MB/sec Jan 17 12:06:00.176743 kernel: xor: using function: arm64_neon (25464 MB/sec) Jan 17 12:06:00.227531 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 12:06:00.237831 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:06:00.254692 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:06:00.278710 systemd-udevd[436]: Using default interface naming scheme 'v255'. Jan 17 12:06:00.284274 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:06:00.309693 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 12:06:00.326470 dracut-pre-trigger[447]: rd.md=0: removing MD RAID activation Jan 17 12:06:00.353782 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:06:00.368772 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:06:00.411108 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:06:00.436786 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 12:06:00.467094 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 12:06:00.477208 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:06:00.500862 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:06:00.516150 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:06:00.544542 kernel: hv_vmbus: Vmbus version:5.3 Jan 17 12:06:00.544169 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 12:06:00.564237 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:06:00.629381 kernel: hv_vmbus: registering driver hid_hyperv Jan 17 12:06:00.629408 kernel: hv_vmbus: registering driver hv_netvsc Jan 17 12:06:00.629418 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 17 12:06:00.629427 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0 Jan 17 12:06:00.629437 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 17 12:06:00.629446 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 17 12:06:00.629620 kernel: hv_vmbus: registering driver hv_storvsc Jan 17 12:06:00.629632 kernel: scsi host0: storvsc_host_t Jan 17 12:06:00.629657 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 17 12:06:00.629667 kernel: scsi host1: storvsc_host_t Jan 17 12:06:00.584387 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:06:00.664346 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 17 12:06:00.664425 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/MSFT1000:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1 Jan 17 12:06:00.584686 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 17 12:06:00.682617 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jan 17 12:06:00.671794 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:06:00.684902 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:06:00.712259 kernel: hv_netvsc 002248bb-d413-0022-48bb-d413002248bb eth0: VF slot 1 added Jan 17 12:06:00.685213 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:06:00.704955 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:06:00.731943 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:06:00.759554 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:06:00.794497 kernel: hv_vmbus: registering driver hv_pci Jan 17 12:06:00.794531 kernel: hv_pci eb75ca8c-f507-48dd-a32c-12608b94a950: PCI VMBus probing: Using version 0x10004 Jan 17 12:06:00.596500 kernel: PTP clock support registered Jan 17 12:06:00.603646 kernel: hv_utils: Registering HyperV Utility Driver Jan 17 12:06:00.603663 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 17 12:06:00.603792 kernel: hv_pci eb75ca8c-f507-48dd-a32c-12608b94a950: PCI host bridge to bus f507:00 Jan 17 12:06:00.603882 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 17 12:06:00.603891 kernel: hv_vmbus: registering driver hv_utils Jan 17 12:06:00.603900 kernel: pci_bus f507:00: root bus resource [mem 0xfc0000000-0xfc00fffff window] Jan 17 12:06:00.603994 kernel: hv_utils: Heartbeat IC version 3.0 Jan 17 12:06:00.604006 kernel: hv_utils: Shutdown IC version 3.2 Jan 17 12:06:00.604013 kernel: hv_utils: TimeSync IC version 4.0 Jan 17 12:06:00.604021 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 17 12:06:00.605452 kernel: pci_bus f507:00: No busn resource found for root bus, will use [bus 00-ff] Jan 17 12:06:00.605585 kernel: pci f507:00:02.0: [15b3:1018] type 00 class 0x020000 Jan 17 12:06:00.605707 kernel: pci f507:00:02.0: reg 0x10: [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 17 12:06:00.605833 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 17 12:06:00.605931 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 17 12:06:00.606013 kernel: pci f507:00:02.0: enabling Extended Tags Jan 17 12:06:00.606137 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 17 12:06:00.606235 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 17 12:06:00.606316 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 17 12:06:00.606396 kernel: pci f507:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at f507:00:02.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link) Jan 17 12:06:00.606514 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:06:00.606524 kernel: pci_bus f507:00: busn_res: [bus 00-ff] end is updated to 00 Jan 17 12:06:00.607460 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 17 12:06:00.607565 kernel: pci f507:00:02.0: BAR 0: assigned [mem 0xfc0000000-0xfc00fffff 64bit pref] Jan 17 12:06:00.607668 systemd-journald[217]: Time jumped backwards, rotating. Jan 17 12:06:00.800228 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:06:00.500122 systemd-resolved[253]: Clock change detected. Flushing caches. Jan 17 12:06:00.514735 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 17 12:06:00.646024 kernel: mlx5_core f507:00:02.0: enabling device (0000 -> 0002) Jan 17 12:06:00.867308 kernel: mlx5_core f507:00:02.0: firmware version: 16.30.1284 Jan 17 12:06:00.867445 kernel: hv_netvsc 002248bb-d413-0022-48bb-d413002248bb eth0: VF registering: eth1 Jan 17 12:06:00.867547 kernel: mlx5_core f507:00:02.0 eth1: joined to eth0 Jan 17 12:06:00.867647 kernel: mlx5_core f507:00:02.0: MLX5E: StrdRq(1) RqSz(8) StrdSz(2048) RxCqeCmprss(0 basic) Jan 17 12:06:00.876137 kernel: mlx5_core f507:00:02.0 enP62727s1: renamed from eth1 Jan 17 12:06:01.165228 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 17 12:06:01.223133 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (482) Jan 17 12:06:01.239377 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 17 12:06:01.270355 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 17 12:06:01.307127 kernel: BTRFS: device fsid 8c8354db-e4b6-4022-87e4-d06cc74d2d9f devid 1 transid 40 /dev/sda3 scanned by (udev-worker) (486) Jan 17 12:06:01.321979 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 17 12:06:01.329400 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 17 12:06:01.360381 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 12:06:02.395933 disk-uuid[595]: The operation has completed successfully. Jan 17 12:06:02.401405 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:06:02.455190 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 12:06:02.460296 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 12:06:02.494287 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 12:06:02.506892 sh[711]: Success Jan 17 12:06:02.549470 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 17 12:06:02.748116 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 12:06:02.769275 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 12:06:02.778883 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 12:06:02.806563 kernel: BTRFS info (device dm-0): first mount of filesystem 8c8354db-e4b6-4022-87e4-d06cc74d2d9f Jan 17 12:06:02.806599 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 17 12:06:02.814850 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 12:06:02.820528 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 12:06:02.824862 kernel: BTRFS info (device dm-0): using free space tree Jan 17 12:06:03.225461 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 12:06:03.231968 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 12:06:03.252399 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 12:06:03.260303 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jan 17 12:06:03.301304 kernel: BTRFS info (device sda6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5 Jan 17 12:06:03.301367 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 17 12:06:03.306494 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:06:03.328150 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 12:06:03.337651 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 12:06:03.351281 kernel: BTRFS info (device sda6): last unmount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5 Jan 17 12:06:03.358178 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 12:06:03.374344 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 12:06:03.427018 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:06:03.454303 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:06:03.480844 systemd-networkd[895]: lo: Link UP Jan 17 12:06:03.480858 systemd-networkd[895]: lo: Gained carrier Jan 17 12:06:03.482979 systemd-networkd[895]: Enumeration completed Jan 17 12:06:03.484448 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:06:03.487506 systemd-networkd[895]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:06:03.487510 systemd-networkd[895]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:06:03.494640 systemd[1]: Reached target network.target - Network. Jan 17 12:06:03.553127 kernel: mlx5_core f507:00:02.0 enP62727s1: Link up Jan 17 12:06:03.594128 kernel: hv_netvsc 002248bb-d413-0022-48bb-d413002248bb eth0: Data path switched to VF: enP62727s1 Jan 17 12:06:03.594466 systemd-networkd[895]: enP62727s1: Link UP Jan 17 12:06:03.594570 systemd-networkd[895]: eth0: Link UP Jan 17 12:06:03.594665 systemd-networkd[895]: eth0: Gained carrier Jan 17 12:06:03.594673 systemd-networkd[895]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:06:03.619990 systemd-networkd[895]: enP62727s1: Gained carrier Jan 17 12:06:03.634518 systemd-networkd[895]: eth0: DHCPv4 address 10.200.20.40/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 17 12:06:04.251945 ignition[830]: Ignition 2.19.0 Jan 17 12:06:04.251962 ignition[830]: Stage: fetch-offline Jan 17 12:06:04.256782 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:06:04.251996 ignition[830]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:06:04.252004 ignition[830]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 12:06:04.252097 ignition[830]: parsed url from cmdline: "" Jan 17 12:06:04.252126 ignition[830]: no config URL provided Jan 17 12:06:04.252131 ignition[830]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:06:04.252139 ignition[830]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:06:04.286277 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 17 12:06:04.252144 ignition[830]: failed to fetch config: resource requires networking Jan 17 12:06:04.252536 ignition[830]: Ignition finished successfully Jan 17 12:06:04.308738 ignition[905]: Ignition 2.19.0 Jan 17 12:06:04.308745 ignition[905]: Stage: fetch Jan 17 12:06:04.308942 ignition[905]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:06:04.308952 ignition[905]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 12:06:04.309052 ignition[905]: parsed url from cmdline: "" Jan 17 12:06:04.309055 ignition[905]: no config URL provided Jan 17 12:06:04.309059 ignition[905]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:06:04.309067 ignition[905]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:06:04.309087 ignition[905]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 17 12:06:04.406181 ignition[905]: GET result: OK Jan 17 12:06:04.406272 ignition[905]: config has been read from IMDS userdata Jan 17 12:06:04.406327 ignition[905]: parsing config with SHA512: 15c8bd8314a162f784fc0bb7a52d6888878c0ab0ab5f645f42d3277e37a7da51fcf2f84d6083d53693e889166337db80492e85f6b401acc16d7dcb67c2b054e1 Jan 17 12:06:04.410333 unknown[905]: fetched base config from "system" Jan 17 12:06:04.410710 ignition[905]: fetch: fetch complete Jan 17 12:06:04.410340 unknown[905]: fetched base config from "system" Jan 17 12:06:04.410715 ignition[905]: fetch: fetch passed Jan 17 12:06:04.410346 unknown[905]: fetched user config from "azure" Jan 17 12:06:04.410757 ignition[905]: Ignition finished successfully Jan 17 12:06:04.414507 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 17 12:06:04.441401 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 12:06:04.462475 ignition[912]: Ignition 2.19.0 Jan 17 12:06:04.467461 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 12:06:04.462481 ignition[912]: Stage: kargs Jan 17 12:06:04.462769 ignition[912]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:06:04.462779 ignition[912]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 12:06:04.495270 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 12:06:04.465409 ignition[912]: kargs: kargs passed Jan 17 12:06:04.465471 ignition[912]: Ignition finished successfully Jan 17 12:06:04.513922 ignition[918]: Ignition 2.19.0 Jan 17 12:06:04.519204 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 12:06:04.513928 ignition[918]: Stage: disks Jan 17 12:06:04.527524 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 12:06:04.514093 ignition[918]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:06:04.539445 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:06:04.514129 ignition[918]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 12:06:04.549202 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:06:04.515031 ignition[918]: disks: disks passed Jan 17 12:06:04.560623 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:06:04.515076 ignition[918]: Ignition finished successfully Jan 17 12:06:04.570788 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:06:04.597434 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
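[Annotation] The GET logged above targets the Azure Instance Metadata Service at 169.254.169.254; Ignition's fetch stage reads its config out of the VM's userData there. A hedged sketch of the same request follows: the Metadata: true header is mandatory for IMDS, and the payload is assumed here to arrive base64-encoded (the decoded text is what Ignition hashes with SHA512 above), with a fallback to raw output if decoding fails.

    # Sketch: replicate Ignition's userData fetch from the Azure IMDS
    # endpoint seen in the log. IMDS rejects requests lacking Metadata: true.
    import base64
    import urllib.request

    URL = ("http://169.254.169.254/metadata/instance/compute/userData"
           "?api-version=2021-01-01&format=text")

    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        raw = resp.read()
    try:
        # userData is assumed base64-encoded; strip trailing whitespace first
        print(base64.b64decode(raw.strip(), validate=True).decode("utf-8", "replace"))
    except ValueError:
        print(raw.decode("utf-8", "replace"))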
Jan 17 12:06:04.666379 systemd-fsck[928]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 17 12:06:04.671878 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 12:06:04.692187 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 12:06:04.750160 kernel: EXT4-fs (sda9): mounted filesystem 5d516319-3144-49e6-9760-d0f29faba535 r/w with ordered data mode. Quota mode: none. Jan 17 12:06:04.750491 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 12:06:04.759571 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 12:06:04.799189 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:06:04.809392 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 12:06:04.828197 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (939) Jan 17 12:06:04.828475 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 17 12:06:04.866413 kernel: BTRFS info (device sda6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5 Jan 17 12:06:04.866443 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 17 12:06:04.866489 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:06:04.848017 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 12:06:04.889315 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 12:06:04.848057 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:06:04.860645 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 12:06:04.896411 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 12:06:04.907276 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:06:05.424201 systemd-networkd[895]: enP62727s1: Gained IPv6LL Jan 17 12:06:05.429312 coreos-metadata[941]: Jan 17 12:06:05.429 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 17 12:06:05.437378 coreos-metadata[941]: Jan 17 12:06:05.432 INFO Fetch successful Jan 17 12:06:05.437378 coreos-metadata[941]: Jan 17 12:06:05.432 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 17 12:06:05.454677 coreos-metadata[941]: Jan 17 12:06:05.449 INFO Fetch successful Jan 17 12:06:05.464264 coreos-metadata[941]: Jan 17 12:06:05.464 INFO wrote hostname ci-4081.3.0-a-c8756aff3b to /sysroot/etc/hostname Jan 17 12:06:05.472925 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 12:06:05.488246 systemd-networkd[895]: eth0: Gained IPv6LL Jan 17 12:06:05.957376 initrd-setup-root[968]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 12:06:06.003616 initrd-setup-root[975]: cut: /sysroot/etc/group: No such file or directory Jan 17 12:06:06.012717 initrd-setup-root[982]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 12:06:06.036653 initrd-setup-root[989]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 12:06:06.981991 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 12:06:06.997362 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 12:06:07.010539 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Jan 17 12:06:07.028536 kernel: BTRFS info (device sda6): last unmount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5 Jan 17 12:06:07.024753 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 12:06:07.054081 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 12:06:07.068563 ignition[1058]: INFO : Ignition 2.19.0 Jan 17 12:06:07.068563 ignition[1058]: INFO : Stage: mount Jan 17 12:06:07.068563 ignition[1058]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:06:07.068563 ignition[1058]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 12:06:07.068563 ignition[1058]: INFO : mount: mount passed Jan 17 12:06:07.068563 ignition[1058]: INFO : Ignition finished successfully Jan 17 12:06:07.071549 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 12:06:07.089441 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 12:06:07.112361 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:06:07.156128 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1069) Jan 17 12:06:07.156184 kernel: BTRFS info (device sda6): first mount of filesystem 5a5108d6-bc75-4f85-aab0-f326070fd0b5 Jan 17 12:06:07.166374 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 17 12:06:07.166410 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:06:07.173121 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 12:06:07.175153 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:06:07.207265 ignition[1087]: INFO : Ignition 2.19.0 Jan 17 12:06:07.211618 ignition[1087]: INFO : Stage: files Jan 17 12:06:07.211618 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:06:07.211618 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 12:06:07.211618 ignition[1087]: DEBUG : files: compiled without relabeling support, skipping Jan 17 12:06:07.234190 ignition[1087]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 12:06:07.234190 ignition[1087]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 12:06:07.316440 ignition[1087]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 12:06:07.323966 ignition[1087]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 12:06:07.323966 ignition[1087]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 12:06:07.316953 unknown[1087]: wrote ssh authorized keys file for user: core Jan 17 12:06:07.344745 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 17 12:06:07.344745 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jan 17 12:06:07.437726 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 17 12:06:07.733128 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jan 17 12:06:07.733128 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 17 12:06:07.755020 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file 
"/sysroot/home/core/install.sh" Jan 17 12:06:07.755020 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:06:07.755020 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:06:07.755020 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:06:07.755020 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:06:07.755020 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:06:07.755020 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:06:07.755020 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:06:07.755020 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:06:07.755020 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 17 12:06:07.755020 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 17 12:06:07.755020 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 17 12:06:07.755020 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Jan 17 12:06:08.194048 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 17 12:06:08.403740 ignition[1087]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jan 17 12:06:08.403740 ignition[1087]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 17 12:06:08.459163 ignition[1087]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:06:08.472791 ignition[1087]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:06:08.472791 ignition[1087]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 17 12:06:08.472791 ignition[1087]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 17 12:06:08.472791 ignition[1087]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 12:06:08.472791 ignition[1087]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:06:08.472791 ignition[1087]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:06:08.472791 ignition[1087]: INFO : 
files: files passed Jan 17 12:06:08.472791 ignition[1087]: INFO : Ignition finished successfully Jan 17 12:06:08.490128 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 12:06:08.539544 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 12:06:08.561312 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 12:06:08.571075 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 12:06:08.571181 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 12:06:08.620372 initrd-setup-root-after-ignition[1119]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:06:08.613980 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:06:08.652186 initrd-setup-root-after-ignition[1115]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:06:08.652186 initrd-setup-root-after-ignition[1115]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:06:08.628473 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 12:06:08.655275 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 12:06:08.695853 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 12:06:08.701187 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 12:06:08.709480 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 12:06:08.722718 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 12:06:08.734360 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 12:06:08.750359 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 12:06:08.774178 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:06:08.790406 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 12:06:08.812686 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 12:06:08.812799 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 12:06:08.825982 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:06:08.838850 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:06:08.852575 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 12:06:08.865084 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 17 12:06:08.865215 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:06:08.883242 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 12:06:08.895160 systemd[1]: Stopped target basic.target - Basic System. Jan 17 12:06:08.905452 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 12:06:08.916051 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:06:08.928390 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 12:06:08.940498 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 12:06:08.951881 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
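[Annotation] The file, link, and unit operations the files stage logged above (op(3) through op(e)) correspond one-to-one to entries in the Ignition config fetched earlier. Below is a hypothetical, trimmed reconstruction of what such a config plausibly contained, rendered from Python so the JSON is valid; the paths and URLs are taken from the log, while the spec version and the unit contents are assumptions and deliberately elided.

    # Hypothetical sketch of an Ignition (spec 3.x) config that would produce
    # the logged ops: fetch helm, link the kubernetes sysext, enable a unit.
    import json

    config = {
        "ignition": {"version": "3.3.0"},  # assumed spec version
        "storage": {
            "files": [{
                "path": "/opt/helm-v3.13.2-linux-arm64.tar.gz",
                "contents": {"source":
                    "https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz"},
            }],
            "links": [{
                "path": "/etc/extensions/kubernetes.raw",
                "target":
                    "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw",
            }],
        },
        "systemd": {
            "units": [{
                "name": "prepare-helm.service",
                "enabled": True,
                "contents": "[Unit]\n...",  # elided; not visible in the log
            }],
        },
    }

    print(json.dumps(config, indent=2))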
Jan 17 12:06:08.963892 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 12:06:08.976029 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 12:06:08.986875 systemd[1]: Stopped target swap.target - Swaps. Jan 17 12:06:08.997562 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 12:06:08.997642 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:06:09.012677 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:06:09.018847 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:06:09.031288 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 12:06:09.036645 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:06:09.043681 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 12:06:09.043752 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 12:06:09.062512 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 12:06:09.062618 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:06:09.083146 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 12:06:09.083234 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 12:06:09.096616 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 17 12:06:09.096667 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 12:06:09.184791 ignition[1140]: INFO : Ignition 2.19.0 Jan 17 12:06:09.184791 ignition[1140]: INFO : Stage: umount Jan 17 12:06:09.184791 ignition[1140]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:06:09.184791 ignition[1140]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 12:06:09.184791 ignition[1140]: INFO : umount: umount passed Jan 17 12:06:09.184791 ignition[1140]: INFO : Ignition finished successfully Jan 17 12:06:09.136413 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 12:06:09.161640 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 12:06:09.190203 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 12:06:09.190322 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:06:09.208407 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 17 12:06:09.208473 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:06:09.222004 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 12:06:09.222124 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 12:06:09.233419 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 12:06:09.233484 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 12:06:09.244274 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 12:06:09.244327 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 12:06:09.255391 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 12:06:09.255437 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 12:06:09.267063 systemd[1]: Stopped target network.target - Network. Jan 17 12:06:09.277659 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Jan 17 12:06:09.277733 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:06:09.290252 systemd[1]: Stopped target paths.target - Path Units. Jan 17 12:06:09.300748 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 12:06:09.306640 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:06:09.318478 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 12:06:09.331526 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 12:06:09.341843 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 12:06:09.341901 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:06:09.353456 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 12:06:09.353521 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:06:09.365296 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 12:06:09.365368 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 12:06:09.377032 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 12:06:09.377123 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 12:06:09.388001 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 12:06:09.400491 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 12:06:09.596456 kernel: hv_netvsc 002248bb-d413-0022-48bb-d413002248bb eth0: Data path switched from VF: enP62727s1 Jan 17 12:06:09.410151 systemd-networkd[895]: eth0: DHCPv6 lease lost Jan 17 12:06:09.417622 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 12:06:09.418226 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 12:06:09.418342 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 12:06:09.430459 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 12:06:09.430531 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:06:09.462338 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 17 12:06:09.467637 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 12:06:09.467711 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:06:09.475306 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:06:09.496763 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 12:06:09.496869 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 12:06:09.525281 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:06:09.525402 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:06:09.536833 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 12:06:09.536901 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 12:06:09.548362 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 12:06:09.548424 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:06:09.574770 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 12:06:09.574910 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:06:09.588913 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. 
Jan 17 12:06:09.589055 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 12:06:09.602625 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 12:06:09.602675 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:06:09.612769 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 12:06:09.612837 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:06:09.628224 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 12:06:09.628293 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 12:06:09.645856 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:06:09.645940 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:06:09.697399 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 12:06:09.716243 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 12:06:09.716362 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:06:09.733963 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 17 12:06:09.734022 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:06:09.749388 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 12:06:09.749442 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:06:09.764836 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:06:09.764897 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:06:09.780127 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 12:06:09.780267 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 12:06:09.794571 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 12:06:09.794659 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 12:06:09.933893 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 12:06:09.934054 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 12:06:09.943481 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 12:06:09.954275 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 12:06:09.954353 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 12:06:09.993376 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 12:06:10.024635 systemd[1]: Switching root. Jan 17 12:06:10.100857 systemd-journald[217]: Journal stopped Jan 17 12:06:14.814196 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). 
Jan 17 12:06:14.814234 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 12:06:14.814249 kernel: SELinux: policy capability open_perms=1 Jan 17 12:06:14.814261 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 12:06:14.814269 kernel: SELinux: policy capability always_check_network=0 Jan 17 12:06:14.814277 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 12:06:14.814286 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 12:06:14.814294 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 12:06:14.814302 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 12:06:14.814310 kernel: audit: type=1403 audit(1737115571.220:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 12:06:14.814321 systemd[1]: Successfully loaded SELinux policy in 230.929ms. Jan 17 12:06:14.814331 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.557ms. Jan 17 12:06:14.814341 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:06:14.814350 systemd[1]: Detected virtualization microsoft. Jan 17 12:06:14.814360 systemd[1]: Detected architecture arm64. Jan 17 12:06:14.814370 systemd[1]: Detected first boot. Jan 17 12:06:14.814379 systemd[1]: Hostname set to <ci-4081.3.0-a-c8756aff3b>. Jan 17 12:06:14.814388 systemd[1]: Initializing machine ID from random generator. Jan 17 12:06:14.814397 zram_generator::config[1180]: No configuration found. Jan 17 12:06:14.814408 systemd[1]: Populated /etc with preset unit settings. Jan 17 12:06:14.814417 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 17 12:06:14.814427 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 17 12:06:14.814443 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 17 12:06:14.814458 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 12:06:14.814467 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 12:06:14.814477 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 12:06:14.814487 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 12:06:14.814496 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 12:06:14.814508 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 12:06:14.814517 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 12:06:14.814527 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 12:06:14.814536 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:06:14.814545 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:06:14.814555 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 12:06:14.814564 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 17 12:06:14.814573 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
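[Annotation] "Detected first boot." and "Initializing machine ID from random generator." above follow from /etc/machine-id being missing, empty, or the literal marker "uninitialized" when PID 1 starts. A small sketch approximating that check; the exact rule is summarized from machine-id(5) semantics and should be treated as illustrative, not authoritative:

    # Sketch: approximate systemd's first-boot test on /etc/machine-id.
    from pathlib import Path

    def looks_like_first_boot(path="/etc/machine-id"):
        p = Path(path)
        if not p.exists():
            return True
        content = p.read_text().strip()
        return content in ("", "uninitialized")

    if __name__ == "__main__":
        print("first boot?", looks_like_first_boot())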
Jan 17 12:06:14.814583 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:06:14.814594 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 17 12:06:14.814603 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:06:14.814613 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 17 12:06:14.814625 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 17 12:06:14.814634 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 17 12:06:14.814644 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 12:06:14.814654 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:06:14.814667 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:06:14.814676 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:06:14.814686 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:06:14.814696 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 12:06:14.814705 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 12:06:14.814715 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:06:14.814724 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:06:14.814736 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:06:14.814746 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 12:06:14.814755 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 12:06:14.814765 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 12:06:14.814775 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 12:06:14.814784 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 12:06:14.814796 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 12:06:14.814806 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 12:06:14.814816 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 12:06:14.814826 systemd[1]: Reached target machines.target - Containers. Jan 17 12:06:14.814835 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 12:06:14.814845 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:06:14.814855 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:06:14.814864 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 12:06:14.814878 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:06:14.814888 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:06:14.814897 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:06:14.814907 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 12:06:14.814916 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 17 12:06:14.814926 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 17 12:06:14.814936 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 17 12:06:14.814946 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 17 12:06:14.814956 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 17 12:06:14.814967 systemd[1]: Stopped systemd-fsck-usr.service. Jan 17 12:06:14.814976 kernel: loop: module loaded Jan 17 12:06:14.814985 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:06:14.814995 kernel: fuse: init (API version 7.39) Jan 17 12:06:14.815004 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:06:14.815013 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 12:06:14.815053 systemd-journald[1277]: Collecting audit messages is disabled. Jan 17 12:06:14.815076 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 12:06:14.815087 systemd-journald[1277]: Journal started Jan 17 12:06:14.815130 systemd-journald[1277]: Runtime Journal (/run/log/journal/76b30f46b70e469f8fdaf6b2a12c9a82) is 8.0M, max 78.5M, 70.5M free. Jan 17 12:06:13.718119 systemd[1]: Queued start job for default target multi-user.target. Jan 17 12:06:13.875086 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 17 12:06:13.875482 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 17 12:06:13.875800 systemd[1]: systemd-journald.service: Consumed 3.368s CPU time. Jan 17 12:06:14.830119 kernel: ACPI: bus type drm_connector registered Jan 17 12:06:14.830166 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:06:14.852788 systemd[1]: verity-setup.service: Deactivated successfully. Jan 17 12:06:14.852854 systemd[1]: Stopped verity-setup.service. Jan 17 12:06:14.872752 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:06:14.873817 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 12:06:14.880421 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 12:06:14.887504 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 12:06:14.893296 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 12:06:14.899653 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 12:06:14.906543 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 12:06:14.912289 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 12:06:14.919795 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:06:14.927380 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 12:06:14.927517 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 12:06:14.934947 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:06:14.936042 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:06:14.943182 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:06:14.943351 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:06:14.950123 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jan 17 12:06:14.950267 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:06:14.957741 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 12:06:14.957889 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 12:06:14.965710 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:06:14.965860 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:06:14.972706 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:06:14.979607 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 17 12:06:14.989130 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 12:06:14.998206 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:06:15.014428 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 12:06:15.031209 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 12:06:15.039086 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 12:06:15.045933 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 12:06:15.045978 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:06:15.052927 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 12:06:15.061432 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 12:06:15.069805 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 12:06:15.075573 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:06:15.107354 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 12:06:15.114502 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 12:06:15.121185 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:06:15.124371 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 12:06:15.132618 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:06:15.138582 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:06:15.147315 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 12:06:15.160317 systemd-journald[1277]: Time spent on flushing to /var/log/journal/76b30f46b70e469f8fdaf6b2a12c9a82 is 72.393ms for 892 entries. Jan 17 12:06:15.160317 systemd-journald[1277]: System Journal (/var/log/journal/76b30f46b70e469f8fdaf6b2a12c9a82) is 11.8M, max 2.6G, 2.6G free. Jan 17 12:06:15.294636 systemd-journald[1277]: Received client request to flush runtime journal. Jan 17 12:06:15.294672 systemd-journald[1277]: /var/log/journal/76b30f46b70e469f8fdaf6b2a12c9a82/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Jan 17 12:06:15.294697 systemd-journald[1277]: Rotating system journal. 
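[Annotation] Entries like the flush timing and rotation messages above land in the persistent journal under /var/log/journal once systemd-journal-flush.service has run. If the python-systemd bindings are installed (an assumption; they ship separately from systemd itself, often packaged as python-systemd or systemd-python), the same stream can be filtered programmatically:

    # Sketch: read this boot's journal entries for one unit via python-systemd.
    from systemd import journal

    reader = journal.Reader()
    reader.this_boot()                      # restrict to the current boot
    reader.add_match(_SYSTEMD_UNIT="systemd-journald.service")
    for entry in reader:
        print(entry.get("__REALTIME_TIMESTAMP"), entry.get("MESSAGE"))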
Jan 17 12:06:15.168454 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:06:15.178337 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 12:06:15.193494 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 12:06:15.215589 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 12:06:15.229871 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 12:06:15.239798 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 12:06:15.251616 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 12:06:15.269932 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 12:06:15.278751 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:06:15.288523 udevadm[1318]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 17 12:06:15.294524 systemd-tmpfiles[1316]: ACLs are not supported, ignoring. Jan 17 12:06:15.294536 systemd-tmpfiles[1316]: ACLs are not supported, ignoring. Jan 17 12:06:15.298132 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 12:06:15.307286 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:06:15.331320 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 12:06:15.339826 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 12:06:15.340574 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 12:06:15.357138 kernel: loop0: detected capacity change from 0 to 114328 Jan 17 12:06:15.410094 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 12:06:15.424404 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:06:15.442879 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. Jan 17 12:06:15.442899 systemd-tmpfiles[1337]: ACLs are not supported, ignoring. Jan 17 12:06:15.447331 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:06:15.750142 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 12:06:15.795125 kernel: loop1: detected capacity change from 0 to 114432 Jan 17 12:06:16.149166 kernel: loop2: detected capacity change from 0 to 31320 Jan 17 12:06:16.491205 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 12:06:16.509130 kernel: loop3: detected capacity change from 0 to 194096 Jan 17 12:06:16.510412 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:06:16.535001 systemd-udevd[1345]: Using default interface naming scheme 'v255'. Jan 17 12:06:16.544149 kernel: loop4: detected capacity change from 0 to 114328 Jan 17 12:06:16.557170 kernel: loop5: detected capacity change from 0 to 114432 Jan 17 12:06:16.569132 kernel: loop6: detected capacity change from 0 to 31320 Jan 17 12:06:16.579138 kernel: loop7: detected capacity change from 0 to 194096 Jan 17 12:06:16.585178 (sd-merge)[1347]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. 
Jan 17 12:06:16.585646 (sd-merge)[1347]: Merged extensions into '/usr'. Jan 17 12:06:16.590427 systemd[1]: Reloading requested from client PID 1314 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 12:06:16.590584 systemd[1]: Reloading... Jan 17 12:06:16.655139 zram_generator::config[1376]: No configuration found. Jan 17 12:06:16.801386 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:06:16.857278 systemd[1]: Reloading finished in 266 ms. Jan 17 12:06:16.885209 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 12:06:16.901309 systemd[1]: Starting ensure-sysext.service... Jan 17 12:06:16.907346 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:06:16.919761 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:06:16.948459 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:06:16.963778 systemd-tmpfiles[1429]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 12:06:16.964026 systemd[1]: Reloading requested from client PID 1428 ('systemctl') (unit ensure-sysext.service)... Jan 17 12:06:16.964037 systemd[1]: Reloading... Jan 17 12:06:16.964526 systemd-tmpfiles[1429]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 12:06:16.965266 systemd-tmpfiles[1429]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 12:06:16.965485 systemd-tmpfiles[1429]: ACLs are not supported, ignoring. Jan 17 12:06:16.965530 systemd-tmpfiles[1429]: ACLs are not supported, ignoring. Jan 17 12:06:16.988208 systemd-tmpfiles[1429]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:06:16.988446 systemd-tmpfiles[1429]: Skipping /boot Jan 17 12:06:17.013255 systemd-tmpfiles[1429]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:06:17.013267 systemd-tmpfiles[1429]: Skipping /boot Jan 17 12:06:17.089228 zram_generator::config[1480]: No configuration found. Jan 17 12:06:17.229247 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 12:06:17.252399 kernel: hv_vmbus: registering driver hv_balloon Jan 17 12:06:17.252507 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 17 12:06:17.259840 kernel: hv_balloon: Memory hot add disabled on ARM64 Jan 17 12:06:17.287195 kernel: hv_vmbus: registering driver hyperv_fb Jan 17 12:06:17.304009 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 17 12:06:17.304114 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 17 12:06:17.315354 kernel: Console: switching to colour dummy device 80x25 Jan 17 12:06:17.332347 kernel: Console: switching to colour frame buffer device 128x48 Jan 17 12:06:17.334588 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:06:17.345232 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1446) Jan 17 12:06:17.420899 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. 
Jan 17 12:06:17.421449 systemd[1]: Reloading finished in 457 ms. Jan 17 12:06:17.441166 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:06:17.484888 systemd[1]: Finished ensure-sysext.service. Jan 17 12:06:17.500342 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 12:06:17.512953 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 17 12:06:17.527302 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:06:17.537365 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 12:06:17.550116 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:06:17.551386 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 12:06:17.563329 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:06:17.572337 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:06:17.581018 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:06:17.589308 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:06:17.598447 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:06:17.601539 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 12:06:17.611307 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 12:06:17.621332 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:06:17.627391 systemd[1]: Reached target time-set.target - System Time Set. Jan 17 12:06:17.644268 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 12:06:17.652842 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 12:06:17.665390 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:06:17.674774 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:06:17.676143 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:06:17.684737 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:06:17.684888 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:06:17.691674 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:06:17.691810 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:06:17.698921 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:06:17.699049 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:06:17.705520 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 17 12:06:17.711553 lvm[1595]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:06:17.724684 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 12:06:17.737051 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Jan 17 12:06:17.737741 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:06:17.738460 augenrules[1623]: No rules Jan 17 12:06:17.740636 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:06:17.752851 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 12:06:17.762778 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 12:06:17.772863 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 12:06:17.789386 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:06:17.807328 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 12:06:17.818144 lvm[1641]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:06:17.847754 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 12:06:17.896190 systemd-resolved[1612]: Positive Trust Anchors: Jan 17 12:06:17.896206 systemd-resolved[1612]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:06:17.896239 systemd-resolved[1612]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:06:17.915309 systemd-resolved[1612]: Using system hostname 'ci-4081.3.0-a-c8756aff3b'. Jan 17 12:06:17.916650 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:06:17.923236 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:06:17.941304 systemd-networkd[1442]: lo: Link UP Jan 17 12:06:17.941315 systemd-networkd[1442]: lo: Gained carrier Jan 17 12:06:17.943383 systemd-networkd[1442]: Enumeration completed Jan 17 12:06:17.943532 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:06:17.944234 systemd-networkd[1442]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:06:17.944238 systemd-networkd[1442]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:06:17.951600 systemd[1]: Reached target network.target - Network. Jan 17 12:06:17.962362 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 17 12:06:18.012134 kernel: mlx5_core f507:00:02.0 enP62727s1: Link up Jan 17 12:06:18.047712 kernel: hv_netvsc 002248bb-d413-0022-48bb-d413002248bb eth0: Data path switched to VF: enP62727s1 Jan 17 12:06:18.048282 systemd-networkd[1442]: enP62727s1: Link UP Jan 17 12:06:18.048372 systemd-networkd[1442]: eth0: Link UP Jan 17 12:06:18.048375 systemd-networkd[1442]: eth0: Gained carrier Jan 17 12:06:18.048390 systemd-networkd[1442]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 17 12:06:18.053484 systemd-networkd[1442]: enP62727s1: Gained carrier Jan 17 12:06:18.063176 systemd-networkd[1442]: eth0: DHCPv4 address 10.200.20.40/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 17 12:06:18.172274 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:06:18.536954 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 12:06:18.546478 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:06:19.184243 systemd-networkd[1442]: eth0: Gained IPv6LL Jan 17 12:06:19.187163 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 12:06:19.195049 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 12:06:19.824247 systemd-networkd[1442]: enP62727s1: Gained IPv6LL Jan 17 12:06:21.254825 ldconfig[1309]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 12:06:21.271175 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 17 12:06:21.282336 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 12:06:21.297509 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 12:06:21.303928 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:06:21.309938 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 12:06:21.316585 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 12:06:21.323860 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 12:06:21.329577 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 12:06:21.336220 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 12:06:21.343745 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 12:06:21.343777 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:06:21.348723 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:06:21.354492 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 12:06:21.362275 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 12:06:21.374943 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 12:06:21.381044 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 12:06:21.387185 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:06:21.392328 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:06:21.397302 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:06:21.397333 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:06:21.408249 systemd[1]: Starting chronyd.service - NTP client/server... Jan 17 12:06:21.416271 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 12:06:21.427328 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... 
Jan 17 12:06:21.436327 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 12:06:21.443387 (chronyd)[1656]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jan 17 12:06:21.455953 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 12:06:21.465984 jq[1660]: false Jan 17 12:06:21.467364 chronyd[1665]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jan 17 12:06:21.474346 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 12:06:21.480015 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 12:06:21.480058 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jan 17 12:06:21.481280 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 17 12:06:21.486858 chronyd[1665]: Timezone right/UTC failed leap second check, ignoring Jan 17 12:06:21.487072 chronyd[1665]: Loaded seccomp filter (level 2) Jan 17 12:06:21.487486 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 17 12:06:21.490082 KVP[1667]: KVP starting; pid is:1667 Jan 17 12:06:21.496379 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:06:21.502835 KVP[1667]: KVP LIC Version: 3.1 Jan 17 12:06:21.506212 kernel: hv_utils: KVP IC version 4.0 Jan 17 12:06:21.516424 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 12:06:21.519230 extend-filesystems[1663]: Found loop4 Jan 17 12:06:21.519230 extend-filesystems[1663]: Found loop5 Jan 17 12:06:21.519230 extend-filesystems[1663]: Found loop6 Jan 17 12:06:21.519230 extend-filesystems[1663]: Found loop7 Jan 17 12:06:21.519230 extend-filesystems[1663]: Found sda Jan 17 12:06:21.519230 extend-filesystems[1663]: Found sda1 Jan 17 12:06:21.519230 extend-filesystems[1663]: Found sda2 Jan 17 12:06:21.519230 extend-filesystems[1663]: Found sda3 Jan 17 12:06:21.519230 extend-filesystems[1663]: Found usr Jan 17 12:06:21.519230 extend-filesystems[1663]: Found sda4 Jan 17 12:06:21.519230 extend-filesystems[1663]: Found sda6 Jan 17 12:06:21.519230 extend-filesystems[1663]: Found sda7 Jan 17 12:06:21.519230 extend-filesystems[1663]: Found sda9 Jan 17 12:06:21.519230 extend-filesystems[1663]: Checking size of /dev/sda9 Jan 17 12:06:21.690127 extend-filesystems[1663]: Old size kept for /dev/sda9 Jan 17 12:06:21.690127 extend-filesystems[1663]: Found sr0 Jan 17 12:06:21.689641 dbus-daemon[1659]: [system] SELinux support is enabled Jan 17 12:06:21.524330 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 12:06:21.547295 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 12:06:21.555250 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 12:06:21.589382 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 12:06:21.623840 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 12:06:21.637557 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
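The long run of "Found ..." lines is extend-filesystems walking the block devices before deciding whether /dev/sda9 needs growing (here the old size is kept). A rough equivalent of that scan via sysfs; the paths are the standard /sys/class/block layout, and the MiB figure is derived from the 512-byte sector count:

```python
# Rough equivalent of the "Found sda1 ... Found sr0" scan above: list block
# devices from sysfs and mark which entries are partitions.
import os

for name in sorted(os.listdir("/sys/class/block")):
    is_part = os.path.exists(f"/sys/class/block/{name}/partition")
    with open(f"/sys/class/block/{name}/size") as f:
        sectors = int(f.read())  # counted in 512-byte sectors
    kind = "partition" if is_part else "disk"
    print(f"Found {name} ({kind}, {sectors * 512 // 1024**2} MiB)")
```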
Jan 17 12:06:21.642595 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 12:06:21.651399 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 12:06:21.660362 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 12:06:21.722046 jq[1695]: true Jan 17 12:06:21.682602 systemd[1]: Started chronyd.service - NTP client/server. Jan 17 12:06:21.696426 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 12:06:21.713058 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 12:06:21.713247 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 12:06:21.713521 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 12:06:21.713662 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 12:06:21.731678 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 12:06:21.732170 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 12:06:21.740370 update_engine[1693]: I20250117 12:06:21.740276 1693 main.cc:92] Flatcar Update Engine starting Jan 17 12:06:21.742248 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 12:06:21.748729 update_engine[1693]: I20250117 12:06:21.748677 1693 update_check_scheduler.cc:74] Next update check in 11m27s Jan 17 12:06:21.798111 coreos-metadata[1658]: Jan 17 12:06:21.797 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 17 12:06:21.801159 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (1706) Jan 17 12:06:21.802522 coreos-metadata[1658]: Jan 17 12:06:21.802 INFO Fetch successful Jan 17 12:06:21.802522 coreos-metadata[1658]: Jan 17 12:06:21.802 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 17 12:06:21.808149 coreos-metadata[1658]: Jan 17 12:06:21.808 INFO Fetch successful Jan 17 12:06:21.808149 coreos-metadata[1658]: Jan 17 12:06:21.808 INFO Fetching http://168.63.129.16/machine/31aec935-5edc-47e6-b5be-f7ff7af6cd41/a5ce6093%2D0e76%2D46a6%2Db6fd%2D8c509cb95bc6.%5Fci%2D4081.3.0%2Da%2Dc8756aff3b?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 17 12:06:21.810422 coreos-metadata[1658]: Jan 17 12:06:21.810 INFO Fetch successful Jan 17 12:06:21.810541 coreos-metadata[1658]: Jan 17 12:06:21.810 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 17 12:06:21.825694 coreos-metadata[1658]: Jan 17 12:06:21.825 INFO Fetch successful Jan 17 12:06:21.827737 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 12:06:21.827933 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 17 12:06:21.842810 systemd-logind[1690]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jan 17 12:06:21.850900 systemd-logind[1690]: New seat seat0. Jan 17 12:06:21.854950 systemd[1]: Started systemd-logind.service - User Login Management. 
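coreos-metadata resolves the VM size from the Azure Instance Metadata Service at the link-local address shown in the log. A sketch of the same request with only the standard library; the Metadata: true header is required by IMDS (documented Azure behavior, not visible in the log itself), and the endpoint is only reachable from inside the VM:

```python
# Sketch of the IMDS request coreos-metadata logs above.
import urllib.request

URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
       "?api-version=2017-08-01&format=text")

req = urllib.request.Request(URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    # Prints a VM size string, e.g. something like "Standard_D2ps_v5"
    print(resp.read().decode())
```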
Jan 17 12:06:21.884540 jq[1728]: true Jan 17 12:06:21.909768 (ntainerd)[1735]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 12:06:21.921364 dbus-daemon[1659]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 17 12:06:21.940120 tar[1720]: linux-arm64/helm Jan 17 12:06:21.941370 systemd[1]: Started update-engine.service - Update Engine. Jan 17 12:06:21.962998 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 12:06:21.963209 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 12:06:21.976454 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 12:06:21.976576 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 12:06:21.992398 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 12:06:22.004887 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 12:06:22.014183 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 12:06:22.092983 bash[1775]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:06:22.094609 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 12:06:22.111190 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 17 12:06:22.273135 locksmithd[1762]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 12:06:22.683132 tar[1720]: linux-arm64/LICENSE Jan 17 12:06:22.683132 tar[1720]: linux-arm64/README.md Jan 17 12:06:22.694052 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:06:22.706773 (kubelet)[1793]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:06:22.711202 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 12:06:22.772761 sshd_keygen[1692]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 12:06:22.798250 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 12:06:22.812928 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 12:06:22.820401 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 17 12:06:22.827576 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 12:06:22.827869 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 12:06:22.857822 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 12:06:22.876824 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 17 12:06:22.899620 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 12:06:22.902723 containerd[1735]: time="2025-01-17T12:06:22.902648340Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 12:06:22.918516 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 12:06:22.928086 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. 
Jan 17 12:06:22.938600 containerd[1735]: time="2025-01-17T12:06:22.938489380Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:06:22.938930 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 12:06:22.940683 containerd[1735]: time="2025-01-17T12:06:22.940644140Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:06:22.940771 containerd[1735]: time="2025-01-17T12:06:22.940757980Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 12:06:22.940826 containerd[1735]: time="2025-01-17T12:06:22.940814100Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 12:06:22.941030 containerd[1735]: time="2025-01-17T12:06:22.941012700Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 12:06:22.941127 containerd[1735]: time="2025-01-17T12:06:22.941096780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 12:06:22.941251 containerd[1735]: time="2025-01-17T12:06:22.941233380Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:06:22.941306 containerd[1735]: time="2025-01-17T12:06:22.941293340Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:06:22.941532 containerd[1735]: time="2025-01-17T12:06:22.941512540Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:06:22.941595 containerd[1735]: time="2025-01-17T12:06:22.941581580Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 12:06:22.941650 containerd[1735]: time="2025-01-17T12:06:22.941636060Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:06:22.941705 containerd[1735]: time="2025-01-17T12:06:22.941692780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 12:06:22.941828 containerd[1735]: time="2025-01-17T12:06:22.941813420Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:06:22.942092 containerd[1735]: time="2025-01-17T12:06:22.942074260Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:06:22.942288 containerd[1735]: time="2025-01-17T12:06:22.942269660Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:06:22.942363 containerd[1735]: time="2025-01-17T12:06:22.942350380Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 12:06:22.942558 containerd[1735]: time="2025-01-17T12:06:22.942480020Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 12:06:22.942558 containerd[1735]: time="2025-01-17T12:06:22.942528180Z" level=info msg="metadata content store policy set" policy=shared Jan 17 12:06:22.957141 containerd[1735]: time="2025-01-17T12:06:22.955948420Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 12:06:22.957141 containerd[1735]: time="2025-01-17T12:06:22.956096980Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 12:06:22.957141 containerd[1735]: time="2025-01-17T12:06:22.956132860Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 12:06:22.957141 containerd[1735]: time="2025-01-17T12:06:22.956151100Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 12:06:22.957141 containerd[1735]: time="2025-01-17T12:06:22.956166700Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 12:06:22.957141 containerd[1735]: time="2025-01-17T12:06:22.956343300Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 12:06:22.957141 containerd[1735]: time="2025-01-17T12:06:22.956564340Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 12:06:22.957141 containerd[1735]: time="2025-01-17T12:06:22.956656300Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 12:06:22.957141 containerd[1735]: time="2025-01-17T12:06:22.956672860Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 12:06:22.957141 containerd[1735]: time="2025-01-17T12:06:22.956686980Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 12:06:22.957141 containerd[1735]: time="2025-01-17T12:06:22.956700580Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 12:06:22.957141 containerd[1735]: time="2025-01-17T12:06:22.956713180Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 12:06:22.957141 containerd[1735]: time="2025-01-17T12:06:22.956728620Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 12:06:22.957141 containerd[1735]: time="2025-01-17T12:06:22.956742460Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 12:06:22.957476 containerd[1735]: time="2025-01-17T12:06:22.956756900Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jan 17 12:06:22.957476 containerd[1735]: time="2025-01-17T12:06:22.956771020Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 12:06:22.957476 containerd[1735]: time="2025-01-17T12:06:22.956784100Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 12:06:22.957476 containerd[1735]: time="2025-01-17T12:06:22.956797180Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 12:06:22.957476 containerd[1735]: time="2025-01-17T12:06:22.956816980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 12:06:22.957476 containerd[1735]: time="2025-01-17T12:06:22.956831260Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 12:06:22.957476 containerd[1735]: time="2025-01-17T12:06:22.956844460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 12:06:22.957476 containerd[1735]: time="2025-01-17T12:06:22.956857900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 12:06:22.957476 containerd[1735]: time="2025-01-17T12:06:22.956869820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 12:06:22.957476 containerd[1735]: time="2025-01-17T12:06:22.956883020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 12:06:22.957476 containerd[1735]: time="2025-01-17T12:06:22.956897180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 12:06:22.957476 containerd[1735]: time="2025-01-17T12:06:22.956910940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 12:06:22.957476 containerd[1735]: time="2025-01-17T12:06:22.956924580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 12:06:22.957476 containerd[1735]: time="2025-01-17T12:06:22.956939020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 12:06:22.957707 containerd[1735]: time="2025-01-17T12:06:22.956954300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 12:06:22.957707 containerd[1735]: time="2025-01-17T12:06:22.956969340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 12:06:22.957707 containerd[1735]: time="2025-01-17T12:06:22.956981700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 12:06:22.957707 containerd[1735]: time="2025-01-17T12:06:22.956997420Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 12:06:22.957707 containerd[1735]: time="2025-01-17T12:06:22.957018780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 12:06:22.957707 containerd[1735]: time="2025-01-17T12:06:22.957031420Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jan 17 12:06:22.957707 containerd[1735]: time="2025-01-17T12:06:22.957041860Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 12:06:22.957707 containerd[1735]: time="2025-01-17T12:06:22.957085340Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 12:06:22.957707 containerd[1735]: time="2025-01-17T12:06:22.957125500Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 12:06:22.957707 containerd[1735]: time="2025-01-17T12:06:22.957137540Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 12:06:22.957707 containerd[1735]: time="2025-01-17T12:06:22.957153340Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 12:06:22.957707 containerd[1735]: time="2025-01-17T12:06:22.957162940Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 12:06:22.957707 containerd[1735]: time="2025-01-17T12:06:22.957182940Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 12:06:22.957707 containerd[1735]: time="2025-01-17T12:06:22.957192980Z" level=info msg="NRI interface is disabled by configuration." Jan 17 12:06:22.957976 containerd[1735]: time="2025-01-17T12:06:22.957204580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 17 12:06:22.957996 containerd[1735]: time="2025-01-17T12:06:22.957497260Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 12:06:22.957996 containerd[1735]: time="2025-01-17T12:06:22.957554900Z" level=info msg="Connect containerd service" Jan 17 12:06:22.957996 containerd[1735]: time="2025-01-17T12:06:22.957592620Z" level=info msg="using legacy CRI server" Jan 17 12:06:22.957996 containerd[1735]: time="2025-01-17T12:06:22.957599020Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 12:06:22.957996 containerd[1735]: time="2025-01-17T12:06:22.957699060Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 12:06:22.959173 containerd[1735]: time="2025-01-17T12:06:22.959140220Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:06:22.960174 containerd[1735]: time="2025-01-17T12:06:22.960134500Z" level=info msg="Start subscribing containerd event" Jan 17 12:06:22.960214 containerd[1735]: time="2025-01-17T12:06:22.960190420Z" level=info msg="Start recovering state" Jan 17 12:06:22.960283 containerd[1735]: time="2025-01-17T12:06:22.960263460Z" level=info msg="Start event monitor" Jan 17 12:06:22.960283 containerd[1735]: time="2025-01-17T12:06:22.960281340Z" level=info msg="Start snapshots syncer" Jan 17 12:06:22.960334 containerd[1735]: time="2025-01-17T12:06:22.960293540Z" level=info msg="Start cni network conf syncer for default" Jan 17 12:06:22.960334 containerd[1735]: time="2025-01-17T12:06:22.960301180Z" level=info msg="Start streaming server" Jan 17 12:06:22.960747 containerd[1735]: time="2025-01-17T12:06:22.960723620Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 12:06:22.960801 containerd[1735]: time="2025-01-17T12:06:22.960773420Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 12:06:22.961940 containerd[1735]: time="2025-01-17T12:06:22.960832220Z" level=info msg="containerd successfully booted in 0.058900s" Jan 17 12:06:22.960949 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 12:06:22.968922 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 12:06:22.976152 systemd[1]: Startup finished in 701ms (kernel) + 12.575s (initrd) + 11.985s (userspace) = 25.263s. 
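The giant "Start cri plugin with config" dump shows the effective CRI settings: overlayfs snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup enabled, and registry.k8s.io/pause:3.8 as the sandbox image. As a sketch, roughly the config.toml fragment that would yield those values on containerd 1.7; the key paths are the usual CRI plugin ones, reconstructed here rather than copied from this host:

```python
# Re-expressing the CRI config dump above as the approximate config.toml
# fragment that produces it, then parsing it back with the stdlib to show
# where each value lives.
import tomllib

CONFIG = """
version = 2
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
[plugins."io.containerd.grpc.v1.cri".containerd]
  snapshotter = "overlayfs"
  default_runtime_name = "runc"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  runtime_type = "io.containerd.runc.v2"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
"""

cfg = tomllib.loads(CONFIG)
cri = cfg["plugins"]["io.containerd.grpc.v1.cri"]
runc = cri["containerd"]["runtimes"]["runc"]
assert runc["options"]["SystemdCgroup"] is True
print(cri["sandbox_image"], runc["runtime_type"])
```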
Jan 17 12:06:23.316782 kubelet[1793]: E0117 12:06:23.316663 1793 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:06:23.319625 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:06:23.319911 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:06:23.441914 login[1822]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jan 17 12:06:23.443923 login[1823]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:06:23.452394 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 12:06:23.458399 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 12:06:23.460636 systemd-logind[1690]: New session 2 of user core. Jan 17 12:06:23.472578 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 12:06:23.480418 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 12:06:23.484222 (systemd)[1836]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 12:06:23.645084 systemd[1836]: Queued start job for default target default.target. Jan 17 12:06:23.650308 systemd[1836]: Created slice app.slice - User Application Slice. Jan 17 12:06:23.650346 systemd[1836]: Reached target paths.target - Paths. Jan 17 12:06:23.650360 systemd[1836]: Reached target timers.target - Timers. Jan 17 12:06:23.651831 systemd[1836]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 12:06:23.665142 systemd[1836]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 12:06:23.665222 systemd[1836]: Reached target sockets.target - Sockets. Jan 17 12:06:23.665236 systemd[1836]: Reached target basic.target - Basic System. Jan 17 12:06:23.665290 systemd[1836]: Reached target default.target - Main User Target. Jan 17 12:06:23.665320 systemd[1836]: Startup finished in 174ms. Jan 17 12:06:23.665551 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 12:06:23.673317 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 12:06:24.442335 login[1822]: pam_unix(login:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:06:24.446790 systemd-logind[1690]: New session 1 of user core. Jan 17 12:06:24.453291 systemd[1]: Started session-1.scope - Session 1 of User core. 
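The kubelet exits immediately because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-style node that file is written during `kubeadm join`, so the unit will keep crash-looping until the node joins a cluster. Purely as an illustration of the file's shape, a minimal KubeletConfiguration; the cgroupDriver value is an assumption chosen to match the SystemdCgroup=true containerd setting above, not a value taken from this machine:

```python
# Illustrative only: the minimal shape of the config file the kubelet above
# fails to find. On kubeadm-provisioned nodes it appears during `kubeadm
# join`; nothing here is copied from this host.
from pathlib import Path
from textwrap import dedent

CONFIG = dedent("""\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd    # assumed, to match SystemdCgroup=true above
""")

Path("/tmp/kubelet-config-example.yaml").write_text(CONFIG)
print(CONFIG)
```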
Jan 17 12:06:24.660573 waagent[1817]: 2025-01-17T12:06:24.660443Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 17 12:06:24.666577 waagent[1817]: 2025-01-17T12:06:24.666485Z INFO Daemon Daemon OS: flatcar 4081.3.0 Jan 17 12:06:24.671093 waagent[1817]: 2025-01-17T12:06:24.671020Z INFO Daemon Daemon Python: 3.11.9 Jan 17 12:06:24.675695 waagent[1817]: 2025-01-17T12:06:24.675439Z INFO Daemon Daemon Run daemon Jan 17 12:06:24.679600 waagent[1817]: 2025-01-17T12:06:24.679538Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.0' Jan 17 12:06:24.689028 waagent[1817]: 2025-01-17T12:06:24.688947Z INFO Daemon Daemon Using waagent for provisioning Jan 17 12:06:24.694403 waagent[1817]: 2025-01-17T12:06:24.694305Z INFO Daemon Daemon Activate resource disk Jan 17 12:06:24.699058 waagent[1817]: 2025-01-17T12:06:24.698994Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 17 12:06:24.711089 waagent[1817]: 2025-01-17T12:06:24.711007Z INFO Daemon Daemon Found device: None Jan 17 12:06:24.715678 waagent[1817]: 2025-01-17T12:06:24.715608Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 17 12:06:24.723913 waagent[1817]: 2025-01-17T12:06:24.723846Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 17 12:06:24.737468 waagent[1817]: 2025-01-17T12:06:24.737380Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 17 12:06:24.743405 waagent[1817]: 2025-01-17T12:06:24.743336Z INFO Daemon Daemon Running default provisioning handler Jan 17 12:06:24.757266 waagent[1817]: 2025-01-17T12:06:24.756560Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 17 12:06:24.772249 waagent[1817]: 2025-01-17T12:06:24.772170Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 17 12:06:24.785091 waagent[1817]: 2025-01-17T12:06:24.785013Z INFO Daemon Daemon cloud-init is enabled: False Jan 17 12:06:24.793799 waagent[1817]: 2025-01-17T12:06:24.793712Z INFO Daemon Daemon Copying ovf-env.xml Jan 17 12:06:24.893620 waagent[1817]: 2025-01-17T12:06:24.893476Z INFO Daemon Daemon Successfully mounted dvd Jan 17 12:06:24.933710 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 17 12:06:24.935672 waagent[1817]: 2025-01-17T12:06:24.935567Z INFO Daemon Daemon Detect protocol endpoint Jan 17 12:06:24.940591 waagent[1817]: 2025-01-17T12:06:24.940507Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 17 12:06:24.946576 waagent[1817]: 2025-01-17T12:06:24.946460Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jan 17 12:06:24.953192 waagent[1817]: 2025-01-17T12:06:24.953124Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 17 12:06:24.958544 waagent[1817]: 2025-01-17T12:06:24.958479Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 17 12:06:24.963628 waagent[1817]: 2025-01-17T12:06:24.963556Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 17 12:06:25.005643 waagent[1817]: 2025-01-17T12:06:25.005590Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 17 12:06:25.012471 waagent[1817]: 2025-01-17T12:06:25.012439Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 17 12:06:25.017820 waagent[1817]: 2025-01-17T12:06:25.017755Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 17 12:06:25.303206 waagent[1817]: 2025-01-17T12:06:25.302881Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 17 12:06:25.309784 waagent[1817]: 2025-01-17T12:06:25.309706Z INFO Daemon Daemon Forcing an update of the goal state. Jan 17 12:06:25.324767 waagent[1817]: 2025-01-17T12:06:25.324698Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 17 12:06:25.348325 waagent[1817]: 2025-01-17T12:06:25.348270Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Jan 17 12:06:25.355178 waagent[1817]: 2025-01-17T12:06:25.355086Z INFO Daemon Jan 17 12:06:25.358518 waagent[1817]: 2025-01-17T12:06:25.358448Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 81470ba9-0eac-4c9b-ab7f-866fdc0e9475 eTag: 2703429626313901951 source: Fabric] Jan 17 12:06:25.370763 waagent[1817]: 2025-01-17T12:06:25.370700Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 17 12:06:25.381586 waagent[1817]: 2025-01-17T12:06:25.381520Z INFO Daemon Jan 17 12:06:25.384648 waagent[1817]: 2025-01-17T12:06:25.384577Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 17 12:06:25.396444 waagent[1817]: 2025-01-17T12:06:25.396396Z INFO Daemon Daemon Downloading artifacts profile blob Jan 17 12:06:25.491045 waagent[1817]: 2025-01-17T12:06:25.490940Z INFO Daemon Downloaded certificate {'thumbprint': '6A620DB71E1ED84C565C8F786F5B86127345CBF7', 'hasPrivateKey': True} Jan 17 12:06:25.501079 waagent[1817]: 2025-01-17T12:06:25.501020Z INFO Daemon Downloaded certificate {'thumbprint': '25A3B41C0160B31EA24DD0C32CF4444B97ED84AD', 'hasPrivateKey': False} Jan 17 12:06:25.511423 waagent[1817]: 2025-01-17T12:06:25.511358Z INFO Daemon Fetch goal state completed Jan 17 12:06:25.523301 waagent[1817]: 2025-01-17T12:06:25.523225Z INFO Daemon Daemon Starting provisioning Jan 17 12:06:25.528557 waagent[1817]: 2025-01-17T12:06:25.528483Z INFO Daemon Daemon Handle ovf-env.xml. Jan 17 12:06:25.534011 waagent[1817]: 2025-01-17T12:06:25.533943Z INFO Daemon Daemon Set hostname [ci-4081.3.0-a-c8756aff3b] Jan 17 12:06:25.543142 waagent[1817]: 2025-01-17T12:06:25.543049Z INFO Daemon Daemon Publish hostname [ci-4081.3.0-a-c8756aff3b] Jan 17 12:06:25.550363 waagent[1817]: 2025-01-17T12:06:25.550284Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 17 12:06:25.557108 waagent[1817]: 2025-01-17T12:06:25.557005Z INFO Daemon Daemon Primary interface is [eth0] Jan 17 12:06:25.618611 systemd-networkd[1442]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:06:25.618619 systemd-networkd[1442]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
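waagent settles on wire protocol version 2012-11-30 and then pulls the goal state from the WireServer over plain HTTP. A sketch of that fetch; the requirement to send the negotiated version in an x-ms-version header is an assumption about the wire protocol, not something this log shows:

```python
# Sketch of the goal-state fetch waagent performs above. The endpoint and
# the negotiated protocol version come from the log; the x-ms-version
# header is an assumption about the wire protocol.
import urllib.request

WIRESERVER = "168.63.129.16"
req = urllib.request.Request(
    f"http://{WIRESERVER}/machine/?comp=goalstate",
    headers={"x-ms-version": "2012-11-30"},
)
with urllib.request.urlopen(req, timeout=5) as resp:
    xml = resp.read().decode()
print(xml[:200])  # <GoalState> XML: incarnation, container id, role config
```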
Jan 17 12:06:25.618676 systemd-networkd[1442]: eth0: DHCP lease lost Jan 17 12:06:25.620258 waagent[1817]: 2025-01-17T12:06:25.620152Z INFO Daemon Daemon Create user account if not exists Jan 17 12:06:25.625950 waagent[1817]: 2025-01-17T12:06:25.625884Z INFO Daemon Daemon User core already exists, skip useradd Jan 17 12:06:25.631724 waagent[1817]: 2025-01-17T12:06:25.631655Z INFO Daemon Daemon Configure sudoer Jan 17 12:06:25.631800 systemd-networkd[1442]: eth0: DHCPv6 lease lost Jan 17 12:06:25.636511 waagent[1817]: 2025-01-17T12:06:25.636435Z INFO Daemon Daemon Configure sshd Jan 17 12:06:25.641056 waagent[1817]: 2025-01-17T12:06:25.640994Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 17 12:06:25.658894 waagent[1817]: 2025-01-17T12:06:25.653550Z INFO Daemon Daemon Deploy ssh public key. Jan 17 12:06:25.667191 systemd-networkd[1442]: eth0: DHCPv4 address 10.200.20.40/24, gateway 10.200.20.1 acquired from 168.63.129.16 Jan 17 12:06:26.779813 waagent[1817]: 2025-01-17T12:06:26.779759Z INFO Daemon Daemon Provisioning complete Jan 17 12:06:26.799167 waagent[1817]: 2025-01-17T12:06:26.799091Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 17 12:06:26.806042 waagent[1817]: 2025-01-17T12:06:26.805975Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 17 12:06:26.817377 waagent[1817]: 2025-01-17T12:06:26.817309Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 17 12:06:26.960615 waagent[1890]: 2025-01-17T12:06:26.960211Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 17 12:06:26.960615 waagent[1890]: 2025-01-17T12:06:26.960382Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.0 Jan 17 12:06:26.960615 waagent[1890]: 2025-01-17T12:06:26.960443Z INFO ExtHandler ExtHandler Python: 3.11.9 Jan 17 12:06:27.327680 waagent[1890]: 2025-01-17T12:06:27.327576Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 17 12:06:27.327878 waagent[1890]: 2025-01-17T12:06:27.327837Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 17 12:06:27.327948 waagent[1890]: 2025-01-17T12:06:27.327916Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 17 12:06:27.341384 waagent[1890]: 2025-01-17T12:06:27.341302Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 17 12:06:27.347928 waagent[1890]: 2025-01-17T12:06:27.347882Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Jan 17 12:06:27.348515 waagent[1890]: 2025-01-17T12:06:27.348470Z INFO ExtHandler Jan 17 12:06:27.348594 waagent[1890]: 2025-01-17T12:06:27.348562Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: e854fd3b-0c77-4bb4-a29e-693ab3a4c293 eTag: 2703429626313901951 source: Fabric] Jan 17 12:06:27.348897 waagent[1890]: 2025-01-17T12:06:27.348857Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
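The primary-interface probe above works by reading /proc/net/route, where addresses are stored as little-endian hex (the full table is dumped further below). A decoding sketch; as a worked example, the gateway value 0114C80A from that dump comes back as 10.200.20.1, exactly the gateway in the DHCP lease just logged:

```python
# /proc/net/route stores IPv4 addresses as little-endian hex, which is why
# the routing-table dump further below shows the gateway as 0114C80A.
import socket
import struct

def decode(hexaddr: str) -> str:
    return socket.inet_ntoa(struct.pack("<L", int(hexaddr, 16)))

assert decode("0114C80A") == "10.200.20.1"  # gateway from the lease above

def default_interface() -> str | None:
    with open("/proc/net/route") as f:
        next(f)  # skip the header row
        for line in f:
            iface, dest, gateway, *_ = line.split()
            if dest == "00000000":  # all-zero destination = default route
                return iface        # e.g. eth0, via decode(gateway)
    return None

print(default_interface())
```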
Jan 17 12:06:27.349527 waagent[1890]: 2025-01-17T12:06:27.349479Z INFO ExtHandler Jan 17 12:06:27.349598 waagent[1890]: 2025-01-17T12:06:27.349567Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 17 12:06:27.353821 waagent[1890]: 2025-01-17T12:06:27.353777Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 17 12:06:27.481516 waagent[1890]: 2025-01-17T12:06:27.481407Z INFO ExtHandler Downloaded certificate {'thumbprint': '6A620DB71E1ED84C565C8F786F5B86127345CBF7', 'hasPrivateKey': True} Jan 17 12:06:27.482397 waagent[1890]: 2025-01-17T12:06:27.481947Z INFO ExtHandler Downloaded certificate {'thumbprint': '25A3B41C0160B31EA24DD0C32CF4444B97ED84AD', 'hasPrivateKey': False} Jan 17 12:06:27.482480 waagent[1890]: 2025-01-17T12:06:27.482428Z INFO ExtHandler Fetch goal state completed Jan 17 12:06:27.499882 waagent[1890]: 2025-01-17T12:06:27.499806Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1890 Jan 17 12:06:27.500054 waagent[1890]: 2025-01-17T12:06:27.500015Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 17 12:06:27.501919 waagent[1890]: 2025-01-17T12:06:27.501860Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 17 12:06:27.502357 waagent[1890]: 2025-01-17T12:06:27.502317Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 17 12:06:27.656038 waagent[1890]: 2025-01-17T12:06:27.655933Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 17 12:06:27.656221 waagent[1890]: 2025-01-17T12:06:27.656175Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 17 12:06:27.663256 waagent[1890]: 2025-01-17T12:06:27.663189Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 17 12:06:27.670397 systemd[1]: Reloading requested from client PID 1905 ('systemctl') (unit waagent.service)... Jan 17 12:06:27.670656 systemd[1]: Reloading... Jan 17 12:06:27.747144 zram_generator::config[1937]: No configuration found. Jan 17 12:06:27.854271 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:06:27.933442 systemd[1]: Reloading finished in 262 ms. Jan 17 12:06:27.956128 waagent[1890]: 2025-01-17T12:06:27.955292Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 17 12:06:27.962632 systemd[1]: Reloading requested from client PID 1993 ('systemctl') (unit waagent.service)... Jan 17 12:06:27.962776 systemd[1]: Reloading... Jan 17 12:06:28.060127 zram_generator::config[2030]: No configuration found. Jan 17 12:06:28.176744 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:06:28.255046 systemd[1]: Reloading finished in 291 ms. 
Jan 17 12:06:28.280230 waagent[1890]: 2025-01-17T12:06:28.279465Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 17 12:06:28.280230 waagent[1890]: 2025-01-17T12:06:28.279655Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 17 12:06:28.908149 waagent[1890]: 2025-01-17T12:06:28.907724Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 17 12:06:28.908463 waagent[1890]: 2025-01-17T12:06:28.908409Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 17 12:06:28.909330 waagent[1890]: 2025-01-17T12:06:28.909241Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 17 12:06:28.909741 waagent[1890]: 2025-01-17T12:06:28.909647Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 17 12:06:28.910239 waagent[1890]: 2025-01-17T12:06:28.910110Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 17 12:06:28.910338 waagent[1890]: 2025-01-17T12:06:28.910232Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 17 12:06:28.910871 waagent[1890]: 2025-01-17T12:06:28.910738Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 17 12:06:28.910957 waagent[1890]: 2025-01-17T12:06:28.910872Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 17 12:06:28.911342 waagent[1890]: 2025-01-17T12:06:28.911276Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 17 12:06:28.911499 waagent[1890]: 2025-01-17T12:06:28.911375Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 17 12:06:28.912884 waagent[1890]: 2025-01-17T12:06:28.912071Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 17 12:06:28.912884 waagent[1890]: 2025-01-17T12:06:28.912156Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 17 12:06:28.912884 waagent[1890]: 2025-01-17T12:06:28.912357Z INFO EnvHandler ExtHandler Configure routes Jan 17 12:06:28.912884 waagent[1890]: 2025-01-17T12:06:28.912428Z INFO EnvHandler ExtHandler Gateway:None Jan 17 12:06:28.912884 waagent[1890]: 2025-01-17T12:06:28.912473Z INFO EnvHandler ExtHandler Routes:None Jan 17 12:06:28.912884 waagent[1890]: 2025-01-17T12:06:28.912013Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 17 12:06:28.920022 waagent[1890]: 2025-01-17T12:06:28.919505Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Jan 17 12:06:28.920022 waagent[1890]: 2025-01-17T12:06:28.919718Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 17 12:06:28.920022 waagent[1890]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 17 12:06:28.920022 waagent[1890]: eth0 00000000 0114C80A 0003 0 0 1024 00000000 0 0 0 Jan 17 12:06:28.920022 waagent[1890]: eth0 0014C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 17 12:06:28.920022 waagent[1890]: eth0 0114C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 17 12:06:28.920022 waagent[1890]: eth0 10813FA8 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 17 12:06:28.920022 waagent[1890]: eth0 FEA9FEA9 0114C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 17 12:06:28.921836 waagent[1890]: 2025-01-17T12:06:28.921788Z INFO ExtHandler ExtHandler Jan 17 12:06:28.922049 waagent[1890]: 2025-01-17T12:06:28.922012Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 6f94fe61-94d0-403e-a3e9-749328e5f2b5 correlation aee997c0-48c9-4b04-b342-c1701176cedd created: 2025-01-17T12:05:11.723305Z] Jan 17 12:06:28.922608 waagent[1890]: 2025-01-17T12:06:28.922561Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 17 12:06:28.923370 waagent[1890]: 2025-01-17T12:06:28.923325Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jan 17 12:06:28.971138 waagent[1890]: 2025-01-17T12:06:28.971009Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 880633DB-3F80-4B53-9F67-5A1923AB7AAA;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 17 12:06:28.976690 waagent[1890]: 2025-01-17T12:06:28.976610Z INFO MonitorHandler ExtHandler Network interfaces: Jan 17 12:06:28.976690 waagent[1890]: Executing ['ip', '-a', '-o', 'link']: Jan 17 12:06:28.976690 waagent[1890]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 17 12:06:28.976690 waagent[1890]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bb:d4:13 brd ff:ff:ff:ff:ff:ff Jan 17 12:06:28.976690 waagent[1890]: 3: enP62727s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:bb:d4:13 brd ff:ff:ff:ff:ff:ff\ altname enP62727p0s2 Jan 17 12:06:28.976690 waagent[1890]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 17 12:06:28.976690 waagent[1890]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 17 12:06:28.976690 waagent[1890]: 2: eth0 inet 10.200.20.40/24 metric 1024 brd 10.200.20.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 17 12:06:28.976690 waagent[1890]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 17 12:06:28.976690 waagent[1890]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 17 12:06:28.976690 waagent[1890]: 2: eth0 inet6 fe80::222:48ff:febb:d413/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 17 12:06:28.976690 waagent[1890]: 3: enP62727s1 inet6 fe80::222:48ff:febb:d413/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 17 12:06:29.048147 waagent[1890]: 2025-01-17T12:06:29.047036Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Jan 17 12:06:29.048147 waagent[1890]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 17 12:06:29.048147 waagent[1890]: pkts bytes target prot opt in out source destination Jan 17 12:06:29.048147 waagent[1890]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 17 12:06:29.048147 waagent[1890]: pkts bytes target prot opt in out source destination Jan 17 12:06:29.048147 waagent[1890]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 17 12:06:29.048147 waagent[1890]: pkts bytes target prot opt in out source destination Jan 17 12:06:29.048147 waagent[1890]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 17 12:06:29.048147 waagent[1890]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 17 12:06:29.048147 waagent[1890]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 17 12:06:29.050675 waagent[1890]: 2025-01-17T12:06:29.050605Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 17 12:06:29.050675 waagent[1890]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 17 12:06:29.050675 waagent[1890]: pkts bytes target prot opt in out source destination Jan 17 12:06:29.050675 waagent[1890]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 17 12:06:29.050675 waagent[1890]: pkts bytes target prot opt in out source destination Jan 17 12:06:29.050675 waagent[1890]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 17 12:06:29.050675 waagent[1890]: pkts bytes target prot opt in out source destination Jan 17 12:06:29.050675 waagent[1890]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 17 12:06:29.050675 waagent[1890]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 17 12:06:29.050675 waagent[1890]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 17 12:06:29.051302 waagent[1890]: 2025-01-17T12:06:29.051261Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 17 12:06:33.570629 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 12:06:33.578316 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:06:33.676373 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:06:33.692400 (kubelet)[2122]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:06:33.760965 kubelet[2122]: E0117 12:06:33.760904 2122 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:06:33.763945 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:06:33.764079 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:06:44.014565 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 12:06:44.024319 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:06:44.284020 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
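The three OUTPUT rules in this dump are waagent's standard WireServer protection: permit DNS to 168.63.129.16, permit root-owned traffic there, and drop any other new or invalid flow toward it. Roughly the iptables invocations behind them, sketched below; which table the agent uses (filter vs. security) varies by agent version, so none is asserted here and the commands are printed rather than executed:

```python
# Roughly the iptables invocations behind the OUTPUT rules dumped above.
# Table choice is deliberately omitted (an assumption either way); printed,
# not executed.
import shlex

WIRESERVER = "168.63.129.16"
RULES = [
    ["iptables", "-w", "-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "--dport", "53", "-j", "ACCEPT"],
    ["iptables", "-w", "-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    ["iptables", "-w", "-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]
for rule in RULES:
    print(shlex.join(rule))
```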
Jan 17 12:06:44.288807 (kubelet)[2139]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:06:44.329971 kubelet[2139]: E0117 12:06:44.329918 2139 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:06:44.332925 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:06:44.333068 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:06:45.278771 chronyd[1665]: Selected source PHC0 Jan 17 12:06:54.554972 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 17 12:06:54.565339 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:06:54.886118 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:06:54.896411 (kubelet)[2155]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:06:54.937750 kubelet[2155]: E0117 12:06:54.937667 2155 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:06:54.940909 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:06:54.941064 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:07:05.054815 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 17 12:07:05.060338 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:07:05.310770 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:07:05.328468 (kubelet)[2171]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:07:05.372376 kernel: hv_balloon: Max. dynamic memory size: 4096 MB Jan 17 12:07:05.372486 kubelet[2171]: E0117 12:07:05.368769 2171 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:07:05.375242 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:07:05.375391 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:07:06.742138 update_engine[1693]: I20250117 12:07:06.741791 1693 update_attempter.cc:509] Updating boot flags... Jan 17 12:07:06.804213 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (2190) Jan 17 12:07:06.894126 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 40 scanned by (udev-worker) (2183) Jan 17 12:07:15.554809 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 17 12:07:15.560298 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 17 12:07:15.838830 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:07:15.850517 (kubelet)[2252]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:07:15.890433 kubelet[2252]: E0117 12:07:15.890369 2252 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:07:15.892559 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:07:15.892687 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:07:19.248986 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 12:07:19.255371 systemd[1]: Started sshd@0-10.200.20.40:22-10.200.16.10:44474.service - OpenSSH per-connection server daemon (10.200.16.10:44474). Jan 17 12:07:19.725329 sshd[2261]: Accepted publickey for core from 10.200.16.10 port 44474 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4 Jan 17 12:07:19.726626 sshd[2261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:07:19.731356 systemd-logind[1690]: New session 3 of user core. Jan 17 12:07:19.738262 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 12:07:20.124610 systemd[1]: Started sshd@1-10.200.20.40:22-10.200.16.10:44476.service - OpenSSH per-connection server daemon (10.200.16.10:44476). Jan 17 12:07:20.551265 sshd[2266]: Accepted publickey for core from 10.200.16.10 port 44476 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4 Jan 17 12:07:20.552568 sshd[2266]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:07:20.556432 systemd-logind[1690]: New session 4 of user core. Jan 17 12:07:20.563256 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 12:07:20.888004 sshd[2266]: pam_unix(sshd:session): session closed for user core Jan 17 12:07:20.892444 systemd[1]: sshd@1-10.200.20.40:22-10.200.16.10:44476.service: Deactivated successfully. Jan 17 12:07:20.894052 systemd[1]: session-4.scope: Deactivated successfully. Jan 17 12:07:20.895437 systemd-logind[1690]: Session 4 logged out. Waiting for processes to exit. Jan 17 12:07:20.896563 systemd-logind[1690]: Removed session 4. Jan 17 12:07:20.962847 systemd[1]: Started sshd@2-10.200.20.40:22-10.200.16.10:44480.service - OpenSSH per-connection server daemon (10.200.16.10:44480). Jan 17 12:07:21.378318 sshd[2273]: Accepted publickey for core from 10.200.16.10 port 44480 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4 Jan 17 12:07:21.379973 sshd[2273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:07:21.384695 systemd-logind[1690]: New session 5 of user core. Jan 17 12:07:21.394296 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 12:07:21.693167 sshd[2273]: pam_unix(sshd:session): session closed for user core Jan 17 12:07:21.696597 systemd[1]: sshd@2-10.200.20.40:22-10.200.16.10:44480.service: Deactivated successfully. Jan 17 12:07:21.698137 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 12:07:21.698923 systemd-logind[1690]: Session 5 logged out. Waiting for processes to exit. Jan 17 12:07:21.699882 systemd-logind[1690]: Removed session 5. 
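sshd logs each accepted key as SHA256:..., which is the unpadded base64 of the SHA-256 digest of the raw public-key blob, the same value `ssh-keygen -lf` prints. A minimal recomputation sketch; the key path below is a placeholder (the key behind G4lMbs... is of course not in the log):

```python
# How sshd derives the "SHA256:G4lMbs..." fingerprint logged above: the
# unpadded base64 of the SHA-256 digest of the public-key blob.
import base64
import hashlib

with open("/etc/ssh/ssh_host_rsa_key.pub") as f:  # placeholder key file
    blob_b64 = f.read().split()[1]  # format: "ssh-rsa <base64-blob> comment"

digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
print("SHA256:" + base64.b64encode(digest).decode().rstrip("="))
```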
Jan 17 12:07:21.768522 systemd[1]: Started sshd@3-10.200.20.40:22-10.200.16.10:44496.service - OpenSSH per-connection server daemon (10.200.16.10:44496).
Jan 17 12:07:22.178056 sshd[2280]: Accepted publickey for core from 10.200.16.10 port 44496 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4
Jan 17 12:07:22.180531 sshd[2280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:07:22.185404 systemd-logind[1690]: New session 6 of user core.
Jan 17 12:07:22.191315 systemd[1]: Started session-6.scope - Session 6 of User core.
Jan 17 12:07:22.498974 sshd[2280]: pam_unix(sshd:session): session closed for user core
Jan 17 12:07:22.502736 systemd[1]: sshd@3-10.200.20.40:22-10.200.16.10:44496.service: Deactivated successfully.
Jan 17 12:07:22.504334 systemd[1]: session-6.scope: Deactivated successfully.
Jan 17 12:07:22.504967 systemd-logind[1690]: Session 6 logged out. Waiting for processes to exit.
Jan 17 12:07:22.505882 systemd-logind[1690]: Removed session 6.
Jan 17 12:07:22.571694 systemd[1]: Started sshd@4-10.200.20.40:22-10.200.16.10:44506.service - OpenSSH per-connection server daemon (10.200.16.10:44506).
Jan 17 12:07:22.976822 sshd[2287]: Accepted publickey for core from 10.200.16.10 port 44506 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4
Jan 17 12:07:22.978142 sshd[2287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:07:22.982814 systemd-logind[1690]: New session 7 of user core.
Jan 17 12:07:22.990272 systemd[1]: Started session-7.scope - Session 7 of User core.
Jan 17 12:07:23.326366 sudo[2290]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jan 17 12:07:23.326642 sudo[2290]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 12:07:23.363949 sudo[2290]: pam_unix(sudo:session): session closed for user root
Jan 17 12:07:23.449445 sshd[2287]: pam_unix(sshd:session): session closed for user core
Jan 17 12:07:23.452575 systemd[1]: sshd@4-10.200.20.40:22-10.200.16.10:44506.service: Deactivated successfully.
Jan 17 12:07:23.454357 systemd[1]: session-7.scope: Deactivated successfully.
Jan 17 12:07:23.456226 systemd-logind[1690]: Session 7 logged out. Waiting for processes to exit.
Jan 17 12:07:23.457226 systemd-logind[1690]: Removed session 7.
Jan 17 12:07:23.528418 systemd[1]: Started sshd@5-10.200.20.40:22-10.200.16.10:44510.service - OpenSSH per-connection server daemon (10.200.16.10:44510).
Jan 17 12:07:23.967916 sshd[2295]: Accepted publickey for core from 10.200.16.10 port 44510 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4
Jan 17 12:07:23.969571 sshd[2295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:07:23.974184 systemd-logind[1690]: New session 8 of user core.
Jan 17 12:07:23.982276 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 17 12:07:24.215217 sudo[2299]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jan 17 12:07:24.215497 sudo[2299]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 12:07:24.218589 sudo[2299]: pam_unix(sudo:session): session closed for user root
Jan 17 12:07:24.223592 sudo[2298]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jan 17 12:07:24.223847 sudo[2298]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 12:07:24.245381 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jan 17 12:07:24.246939 auditctl[2302]: No rules
Jan 17 12:07:24.247262 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 17 12:07:24.247451 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jan 17 12:07:24.249706 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 17 12:07:24.272688 augenrules[2320]: No rules
Jan 17 12:07:24.273952 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 17 12:07:24.275373 sudo[2298]: pam_unix(sudo:session): session closed for user root
Jan 17 12:07:24.364771 sshd[2295]: pam_unix(sshd:session): session closed for user core
Jan 17 12:07:24.368198 systemd[1]: sshd@5-10.200.20.40:22-10.200.16.10:44510.service: Deactivated successfully.
Jan 17 12:07:24.369703 systemd[1]: session-8.scope: Deactivated successfully.
Jan 17 12:07:24.371510 systemd-logind[1690]: Session 8 logged out. Waiting for processes to exit.
Jan 17 12:07:24.372521 systemd-logind[1690]: Removed session 8.
Jan 17 12:07:24.443378 systemd[1]: Started sshd@6-10.200.20.40:22-10.200.16.10:44516.service - OpenSSH per-connection server daemon (10.200.16.10:44516).
Jan 17 12:07:24.872981 sshd[2328]: Accepted publickey for core from 10.200.16.10 port 44516 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4
Jan 17 12:07:24.874327 sshd[2328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:07:24.878032 systemd-logind[1690]: New session 9 of user core.
Jan 17 12:07:24.885279 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 17 12:07:25.120225 sudo[2331]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jan 17 12:07:25.120506 sudo[2331]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jan 17 12:07:26.054674 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6.
Jan 17 12:07:26.062756 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:07:26.130436 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jan 17 12:07:26.130877 (dockerd)[2350]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jan 17 12:07:26.419446 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:07:26.428590 (kubelet)[2356]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 12:07:26.469722 kubelet[2356]: E0117 12:07:26.469672 2356 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 12:07:26.472601 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 12:07:26.472761 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 12:07:26.929141 dockerd[2350]: time="2025-01-17T12:07:26.928835515Z" level=info msg="Starting up"
Jan 17 12:07:27.353166 dockerd[2350]: time="2025-01-17T12:07:27.353116126Z" level=info msg="Loading containers: start."
Jan 17 12:07:27.507141 kernel: Initializing XFRM netlink socket
Jan 17 12:07:27.687777 systemd-networkd[1442]: docker0: Link UP
Jan 17 12:07:27.716762 dockerd[2350]: time="2025-01-17T12:07:27.716709913Z" level=info msg="Loading containers: done."
Jan 17 12:07:27.734935 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3081493984-merged.mount: Deactivated successfully.
Jan 17 12:07:27.746177 dockerd[2350]: time="2025-01-17T12:07:27.746094963Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jan 17 12:07:27.746354 dockerd[2350]: time="2025-01-17T12:07:27.746264964Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jan 17 12:07:27.746421 dockerd[2350]: time="2025-01-17T12:07:27.746404044Z" level=info msg="Daemon has completed initialization"
Jan 17 12:07:27.802577 dockerd[2350]: time="2025-01-17T12:07:27.802428220Z" level=info msg="API listen on /run/docker.sock"
Jan 17 12:07:27.803598 systemd[1]: Started docker.service - Docker Application Container Engine.
Jan 17 12:07:29.393307 containerd[1735]: time="2025-01-17T12:07:29.393263681Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\""
Jan 17 12:07:30.283900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount684583810.mount: Deactivated successfully.
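The dockerd warning above comes from a kernel-config probe: overlay2's native diff path is skipped when CONFIG_OVERLAY_FS_REDIRECT_DIR is enabled. A sketch of the same check, assuming the kernel config is exposed at /proc/config.gz or /boot/config-$(uname -r) (neither path is confirmed by this log):

import gzip, os, platform

def redirect_dir_enabled():
    # Candidate locations for the kernel config; both are assumptions.
    candidates = ["/proc/config.gz", f"/boot/config-{platform.release()}"]
    for path in candidates:
        if not os.path.exists(path):
            continue
        opener = gzip.open if path.endswith(".gz") else open
        with opener(path, "rt") as fh:
            return any(line.startswith("CONFIG_OVERLAY_FS_REDIRECT_DIR=y") for line in fh)
    return None  # kernel config not exposed on this host

print(redirect_dir_enabled())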
Jan 17 12:07:31.612856 containerd[1735]: time="2025-01-17T12:07:31.612810545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:07:31.615877 containerd[1735]: time="2025-01-17T12:07:31.615829791Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=29864935"
Jan 17 12:07:31.618402 containerd[1735]: time="2025-01-17T12:07:31.618346835Z" level=info msg="ImageCreate event name:\"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:07:31.622995 containerd[1735]: time="2025-01-17T12:07:31.622908563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:07:31.624324 containerd[1735]: time="2025-01-17T12:07:31.624088605Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"29861735\" in 2.230781444s"
Jan 17 12:07:31.624324 containerd[1735]: time="2025-01-17T12:07:31.624151925Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:5a490fe478de4f27039cf07d124901df2a58010e72f7afe3f65c70c05ada6715\""
Jan 17 12:07:31.645673 containerd[1735]: time="2025-01-17T12:07:31.645621202Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\""
Jan 17 12:07:33.057644 containerd[1735]: time="2025-01-17T12:07:33.057580887Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:07:33.061287 containerd[1735]: time="2025-01-17T12:07:33.061248613Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=26901561"
Jan 17 12:07:33.069144 containerd[1735]: time="2025-01-17T12:07:33.068952465Z" level=info msg="ImageCreate event name:\"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:07:33.074725 containerd[1735]: time="2025-01-17T12:07:33.074651954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:07:33.075819 containerd[1735]: time="2025-01-17T12:07:33.075779436Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"28305351\" in 1.430112354s"
Jan 17 12:07:33.075887 containerd[1735]: time="2025-01-17T12:07:33.075821276Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:cd43f1277f3b33fd1db15e7f98b093eb07e4d4530ff326356591daeb16369ca2\""
Jan 17 12:07:33.096394 containerd[1735]: time="2025-01-17T12:07:33.096334349Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\""
Jan 17 12:07:34.166192 containerd[1735]: time="2025-01-17T12:07:34.166130903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:07:34.168432 containerd[1735]: time="2025-01-17T12:07:34.168378627Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=16164338"
Jan 17 12:07:34.174303 containerd[1735]: time="2025-01-17T12:07:34.174242476Z" level=info msg="ImageCreate event name:\"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:07:34.181127 containerd[1735]: time="2025-01-17T12:07:34.179483163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:07:34.182371 containerd[1735]: time="2025-01-17T12:07:34.182320648Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"17568146\" in 1.085943819s"
Jan 17 12:07:34.182470 containerd[1735]: time="2025-01-17T12:07:34.182378488Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:4ebb50f72fd1ba66a57f91b338174ab72034493ff261ebb9bbfd717d882178ce\""
Jan 17 12:07:34.204385 containerd[1735]: time="2025-01-17T12:07:34.204352481Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\""
Jan 17 12:07:35.654562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount702034475.mount: Deactivated successfully.
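Each pull record above carries both a byte count ("bytes read") and a wall-clock duration ("Pulled image ... in <t>"), so effective registry throughput falls out directly. The sketch below is pure arithmetic over values quoted in this log; nothing else is assumed:

pulls = {  # image: (bytes read, pull duration in seconds), from the entries above
    "kube-apiserver:v1.30.9":          (29864935, 2.230781444),
    "kube-controller-manager:v1.30.9": (26901561, 1.430112354),
    "kube-scheduler:v1.30.9":          (16164338, 1.085943819),
}
for image, (nbytes, secs) in pulls.items():
    print(f"{image}: {nbytes / secs / 2**20:.1f} MiB/s")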
Jan 17 12:07:35.975982 containerd[1735]: time="2025-01-17T12:07:35.975853912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:07:35.978435 containerd[1735]: time="2025-01-17T12:07:35.978394315Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=25662712"
Jan 17 12:07:35.983158 containerd[1735]: time="2025-01-17T12:07:35.982828562Z" level=info msg="ImageCreate event name:\"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:07:35.988469 containerd[1735]: time="2025-01-17T12:07:35.988413691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:07:35.989490 containerd[1735]: time="2025-01-17T12:07:35.989003131Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"25661731\" in 1.78445017s"
Jan 17 12:07:35.989490 containerd[1735]: time="2025-01-17T12:07:35.989043692Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:d97113839930faa5ab88f70aff4bfb62f7381074a290dd5aadbec9b16b2567a2\""
Jan 17 12:07:36.008493 containerd[1735]: time="2025-01-17T12:07:36.008444721Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jan 17 12:07:36.554670 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
Jan 17 12:07:36.561378 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:07:36.806239 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:07:36.812715 (kubelet)[2601]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 12:07:36.859676 kubelet[2601]: E0117 12:07:36.859619 2601 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 12:07:36.862366 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 12:07:36.862698 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 12:07:37.053921 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1320490237.mount: Deactivated successfully.
Jan 17 12:07:38.073041 containerd[1735]: time="2025-01-17T12:07:38.072982553Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:07:38.075970 containerd[1735]: time="2025-01-17T12:07:38.075840758Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381"
Jan 17 12:07:38.094827 containerd[1735]: time="2025-01-17T12:07:38.094766146Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:07:38.100669 containerd[1735]: time="2025-01-17T12:07:38.100485835Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:07:38.101667 containerd[1735]: time="2025-01-17T12:07:38.101523596Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 2.093035595s"
Jan 17 12:07:38.101667 containerd[1735]: time="2025-01-17T12:07:38.101563316Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\""
Jan 17 12:07:38.121735 containerd[1735]: time="2025-01-17T12:07:38.121694307Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jan 17 12:07:38.736876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4191830510.mount: Deactivated successfully.
Jan 17 12:07:38.764366 containerd[1735]: time="2025-01-17T12:07:38.764306775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:07:38.766735 containerd[1735]: time="2025-01-17T12:07:38.766556779Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821"
Jan 17 12:07:38.772586 containerd[1735]: time="2025-01-17T12:07:38.772527848Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:07:38.777537 containerd[1735]: time="2025-01-17T12:07:38.777479215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:07:38.778414 containerd[1735]: time="2025-01-17T12:07:38.778279297Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 656.54383ms"
Jan 17 12:07:38.778414 containerd[1735]: time="2025-01-17T12:07:38.778315297Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\""
Jan 17 12:07:38.797768 containerd[1735]: time="2025-01-17T12:07:38.797654246Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Jan 17 12:07:39.448315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3402725376.mount: Deactivated successfully.
Jan 17 12:07:43.331777 containerd[1735]: time="2025-01-17T12:07:43.331717016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:07:43.334254 containerd[1735]: time="2025-01-17T12:07:43.333983100Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472"
Jan 17 12:07:43.337869 containerd[1735]: time="2025-01-17T12:07:43.337833107Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:07:43.343423 containerd[1735]: time="2025-01-17T12:07:43.343377117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:07:43.344760 containerd[1735]: time="2025-01-17T12:07:43.344618719Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 4.546889233s"
Jan 17 12:07:43.344760 containerd[1735]: time="2025-01-17T12:07:43.344657319Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\""
Jan 17 12:07:47.055667 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
Jan 17 12:07:47.065573 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:07:47.320303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:07:47.326338 (kubelet)[2780]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 17 12:07:47.377128 kubelet[2780]: E0117 12:07:47.376395 2780 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 17 12:07:47.379371 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 17 12:07:47.379902 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 17 12:07:48.283361 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:07:48.291422 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:07:48.311970 systemd[1]: Reloading requested from client PID 2794 ('systemctl') (unit session-9.scope)...
Jan 17 12:07:48.312059 systemd[1]: Reloading...
Jan 17 12:07:48.433180 zram_generator::config[2832]: No configuration found.
Jan 17 12:07:48.543753 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 12:07:48.621373 systemd[1]: Reloading finished in 308 ms.
Jan 17 12:07:48.665299 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 17 12:07:48.665379 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 17 12:07:48.665705 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:07:48.668811 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:07:48.777934 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:07:48.787438 (kubelet)[2902]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 17 12:07:48.828648 kubelet[2902]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 12:07:48.828648 kubelet[2902]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 17 12:07:48.828648 kubelet[2902]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 12:07:48.829013 kubelet[2902]: I0117 12:07:48.828692 2902 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 17 12:07:49.673715 kubelet[2902]: I0117 12:07:49.673682 2902 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 17 12:07:49.673897 kubelet[2902]: I0117 12:07:49.673886 2902 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 17 12:07:49.674182 kubelet[2902]: I0117 12:07:49.674162 2902 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 17 12:07:49.686034 kubelet[2902]: E0117 12:07:49.685999 2902 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.40:6443: connect: connection refused
Jan 17 12:07:49.687720 kubelet[2902]: I0117 12:07:49.687420 2902 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 17 12:07:49.695452 kubelet[2902]: I0117 12:07:49.695428 2902 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 17 12:07:49.696855 kubelet[2902]: I0117 12:07:49.696817 2902 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 17 12:07:49.697406 kubelet[2902]: I0117 12:07:49.696958 2902 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-c8756aff3b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 17 12:07:49.697406 kubelet[2902]: I0117 12:07:49.697168 2902 topology_manager.go:138] "Creating topology manager with none policy"
Jan 17 12:07:49.697406 kubelet[2902]: I0117 12:07:49.697180 2902 container_manager_linux.go:301] "Creating device plugin manager"
Jan 17 12:07:49.697406 kubelet[2902]: I0117 12:07:49.697297 2902 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 12:07:49.698616 kubelet[2902]: W0117 12:07:49.698541 2902 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-c8756aff3b&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused
Jan 17 12:07:49.698616 kubelet[2902]: E0117 12:07:49.698598 2902 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-c8756aff3b&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused
Jan 17 12:07:49.700132 kubelet[2902]: I0117 12:07:49.698719 2902 kubelet.go:400] "Attempting to sync node with API server"
Jan 17 12:07:49.700132 kubelet[2902]: I0117 12:07:49.698741 2902 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 17 12:07:49.700132 kubelet[2902]: I0117 12:07:49.698775 2902 kubelet.go:312] "Adding apiserver pod source"
Jan 17 12:07:49.700132 kubelet[2902]: I0117 12:07:49.698793 2902 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 17 12:07:49.700132 kubelet[2902]: I0117 12:07:49.699409 2902 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 17 12:07:49.700132 kubelet[2902]: I0117 12:07:49.699562 2902 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 17 12:07:49.700132 kubelet[2902]: W0117 12:07:49.699604 2902 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 17 12:07:49.700132 kubelet[2902]: I0117 12:07:49.700140 2902 server.go:1264] "Started kubelet"
Jan 17 12:07:49.701881 kubelet[2902]: I0117 12:07:49.701839 2902 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 17 12:07:49.705836 kubelet[2902]: I0117 12:07:49.705771 2902 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 17 12:07:49.707182 kubelet[2902]: I0117 12:07:49.707160 2902 server.go:455] "Adding debug handlers to kubelet server"
Jan 17 12:07:49.709647 kubelet[2902]: I0117 12:07:49.709537 2902 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 17 12:07:49.709810 kubelet[2902]: I0117 12:07:49.709760 2902 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 17 12:07:49.712112 kubelet[2902]: W0117 12:07:49.711971 2902 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused
Jan 17 12:07:49.712112 kubelet[2902]: E0117 12:07:49.712027 2902 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused
Jan 17 12:07:49.715205 kubelet[2902]: I0117 12:07:49.715138 2902 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 17 12:07:49.716533 kubelet[2902]: I0117 12:07:49.716506 2902 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 17 12:07:49.716973 kubelet[2902]: E0117 12:07:49.716859 2902 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.20.40:6443/api/v1/namespaces/default/events\": dial tcp 10.200.20.40:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-a-c8756aff3b.181b797f7a3d87b0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-c8756aff3b,UID:ci-4081.3.0-a-c8756aff3b,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-c8756aff3b,},FirstTimestamp:2025-01-17 12:07:49.700118448 +0000 UTC m=+0.909492199,LastTimestamp:2025-01-17 12:07:49.700118448 +0000 UTC m=+0.909492199,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-c8756aff3b,}"
Jan 17 12:07:49.717830 kubelet[2902]: I0117 12:07:49.717680 2902 reconciler.go:26] "Reconciler: start to sync state"
Jan 17 12:07:49.718942 kubelet[2902]: W0117 12:07:49.718831 2902 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused
Jan 17 12:07:49.719074 kubelet[2902]: E0117 12:07:49.719056 2902 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused
Jan 17 12:07:49.720139 kubelet[2902]: E0117 12:07:49.719172 2902 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-c8756aff3b?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="200ms"
Jan 17 12:07:49.720139 kubelet[2902]: I0117 12:07:49.719602 2902 factory.go:221] Registration of the systemd container factory successfully
Jan 17 12:07:49.720139 kubelet[2902]: I0117 12:07:49.719690 2902 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 17 12:07:49.721039 kubelet[2902]: E0117 12:07:49.720789 2902 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 17 12:07:49.721337 kubelet[2902]: I0117 12:07:49.721307 2902 factory.go:221] Registration of the containerd container factory successfully
Jan 17 12:07:49.742914 kubelet[2902]: I0117 12:07:49.742770 2902 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 17 12:07:49.744893 kubelet[2902]: I0117 12:07:49.744856 2902 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 17 12:07:49.744893 kubelet[2902]: I0117 12:07:49.744897 2902 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 17 12:07:49.745035 kubelet[2902]: I0117 12:07:49.744975 2902 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 17 12:07:49.745035 kubelet[2902]: E0117 12:07:49.745020 2902 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 17 12:07:49.747485 kubelet[2902]: W0117 12:07:49.746997 2902 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused
Jan 17 12:07:49.747485 kubelet[2902]: E0117 12:07:49.747063 2902 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused
Jan 17 12:07:49.748407 kubelet[2902]: I0117 12:07:49.748344 2902 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 17 12:07:49.748407 kubelet[2902]: I0117 12:07:49.748360 2902 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 17 12:07:49.748407 kubelet[2902]: I0117 12:07:49.748381 2902 state_mem.go:36] "Initialized new in-memory state store"
Jan 17 12:07:49.755318 kubelet[2902]: I0117 12:07:49.755219 2902 policy_none.go:49] "None policy: Start"
Jan 17 12:07:49.755950 kubelet[2902]: I0117 12:07:49.755927 2902 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 17 12:07:49.756036 kubelet[2902]: I0117 12:07:49.755956 2902 state_mem.go:35] "Initializing new in-memory state store"
Jan 17 12:07:49.767142 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 17 12:07:49.782865 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 17 12:07:49.785730 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 17 12:07:49.793631 kubelet[2902]: I0117 12:07:49.793599 2902 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 17 12:07:49.794397 kubelet[2902]: I0117 12:07:49.793794 2902 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 17 12:07:49.794397 kubelet[2902]: I0117 12:07:49.793895 2902 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 17 12:07:49.795736 kubelet[2902]: E0117 12:07:49.795612 2902 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-a-c8756aff3b\" not found"
Jan 17 12:07:49.818389 kubelet[2902]: I0117 12:07:49.818341 2902 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-c8756aff3b"
Jan 17 12:07:49.818733 kubelet[2902]: E0117 12:07:49.818705 2902 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-4081.3.0-a-c8756aff3b"
Jan 17 12:07:49.846130 kubelet[2902]: I0117 12:07:49.845914 2902 topology_manager.go:215] "Topology Admit Handler" podUID="ec721c75953d8e7d0a537a5c0ab69fb7" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-a-c8756aff3b"
Jan 17 12:07:49.847856 kubelet[2902]: I0117 12:07:49.847710 2902 topology_manager.go:215] "Topology Admit Handler" podUID="3411bc22a95e172a32aaa89d5c10c5af" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-a-c8756aff3b"
Jan 17 12:07:49.849491 kubelet[2902]: I0117 12:07:49.849262 2902 topology_manager.go:215] "Topology Admit Handler" podUID="55002162ce43a351edb74b3acb242260" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-a-c8756aff3b"
Jan 17 12:07:49.856337 systemd[1]: Created slice kubepods-burstable-podec721c75953d8e7d0a537a5c0ab69fb7.slice - libcontainer container kubepods-burstable-podec721c75953d8e7d0a537a5c0ab69fb7.slice.
Jan 17 12:07:49.867817 systemd[1]: Created slice kubepods-burstable-pod3411bc22a95e172a32aaa89d5c10c5af.slice - libcontainer container kubepods-burstable-pod3411bc22a95e172a32aaa89d5c10c5af.slice.
Jan 17 12:07:49.875194 systemd[1]: Created slice kubepods-burstable-pod55002162ce43a351edb74b3acb242260.slice - libcontainer container kubepods-burstable-pod55002162ce43a351edb74b3acb242260.slice.
Jan 17 12:07:49.918239 kubelet[2902]: I0117 12:07:49.918203 2902 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3411bc22a95e172a32aaa89d5c10c5af-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-c8756aff3b\" (UID: \"3411bc22a95e172a32aaa89d5c10c5af\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-c8756aff3b"
Jan 17 12:07:49.918365 kubelet[2902]: I0117 12:07:49.918277 2902 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3411bc22a95e172a32aaa89d5c10c5af-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-c8756aff3b\" (UID: \"3411bc22a95e172a32aaa89d5c10c5af\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-c8756aff3b"
Jan 17 12:07:49.918365 kubelet[2902]: I0117 12:07:49.918300 2902 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ec721c75953d8e7d0a537a5c0ab69fb7-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-c8756aff3b\" (UID: \"ec721c75953d8e7d0a537a5c0ab69fb7\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-c8756aff3b"
Jan 17 12:07:49.918365 kubelet[2902]: I0117 12:07:49.918317 2902 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3411bc22a95e172a32aaa89d5c10c5af-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-c8756aff3b\" (UID: \"3411bc22a95e172a32aaa89d5c10c5af\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-c8756aff3b"
Jan 17 12:07:49.918365 kubelet[2902]: I0117 12:07:49.918356 2902 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3411bc22a95e172a32aaa89d5c10c5af-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-c8756aff3b\" (UID: \"3411bc22a95e172a32aaa89d5c10c5af\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-c8756aff3b"
Jan 17 12:07:49.918458 kubelet[2902]: I0117 12:07:49.918372 2902 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55002162ce43a351edb74b3acb242260-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-c8756aff3b\" (UID: \"55002162ce43a351edb74b3acb242260\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-c8756aff3b"
Jan 17 12:07:49.918458 kubelet[2902]: I0117 12:07:49.918386 2902 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ec721c75953d8e7d0a537a5c0ab69fb7-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-c8756aff3b\" (UID: \"ec721c75953d8e7d0a537a5c0ab69fb7\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-c8756aff3b"
Jan 17 12:07:49.918458 kubelet[2902]: I0117 12:07:49.918434 2902 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ec721c75953d8e7d0a537a5c0ab69fb7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-c8756aff3b\" (UID: \"ec721c75953d8e7d0a537a5c0ab69fb7\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-c8756aff3b"
Jan 17 12:07:49.918458 kubelet[2902]: I0117 12:07:49.918452 2902 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3411bc22a95e172a32aaa89d5c10c5af-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-c8756aff3b\" (UID: \"3411bc22a95e172a32aaa89d5c10c5af\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-c8756aff3b"
Jan 17 12:07:49.920608 kubelet[2902]: E0117 12:07:49.920569 2902 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-c8756aff3b?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="400ms"
Jan 17 12:07:50.021139 kubelet[2902]: I0117 12:07:50.020469 2902 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-c8756aff3b"
Jan 17 12:07:50.021139 kubelet[2902]: E0117 12:07:50.020787 2902 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-4081.3.0-a-c8756aff3b"
Jan 17 12:07:50.166131 containerd[1735]: time="2025-01-17T12:07:50.166023159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-c8756aff3b,Uid:ec721c75953d8e7d0a537a5c0ab69fb7,Namespace:kube-system,Attempt:0,}"
Jan 17 12:07:50.173270 containerd[1735]: time="2025-01-17T12:07:50.173232572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-c8756aff3b,Uid:3411bc22a95e172a32aaa89d5c10c5af,Namespace:kube-system,Attempt:0,}"
Jan 17 12:07:50.178271 containerd[1735]: time="2025-01-17T12:07:50.177896140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-c8756aff3b,Uid:55002162ce43a351edb74b3acb242260,Namespace:kube-system,Attempt:0,}"
Jan 17 12:07:50.322541 kubelet[2902]: E0117 12:07:50.322474 2902 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-c8756aff3b?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="800ms"
Jan 17 12:07:50.423918 kubelet[2902]: I0117 12:07:50.423541 2902 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-c8756aff3b"
Jan 17 12:07:50.423918 kubelet[2902]: E0117 12:07:50.423868 2902 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-4081.3.0-a-c8756aff3b"
Jan 17 12:07:50.704606 kubelet[2902]: W0117 12:07:50.704460 2902 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused
Jan 17 12:07:50.704606 kubelet[2902]: E0117 12:07:50.704509 2902 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.20.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused
Jan 17 12:07:51.123509 kubelet[2902]: E0117 12:07:51.123466 2902 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-c8756aff3b?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="1.6s"
Jan 17 12:07:51.142017 kubelet[2902]: W0117 12:07:51.141986 2902 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.20.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused
Jan 17 12:07:51.142127 kubelet[2902]: E0117 12:07:51.142027 2902 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.20.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused
Jan 17 12:07:51.224999 kubelet[2902]: W0117 12:07:51.224909 2902 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused
Jan 17 12:07:51.224999 kubelet[2902]: E0117 12:07:51.224961 2902 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.20.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused
Jan 17 12:07:51.226680 kubelet[2902]: I0117 12:07:51.226307 2902 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-c8756aff3b"
Jan 17 12:07:51.226680 kubelet[2902]: E0117 12:07:51.226622 2902 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-4081.3.0-a-c8756aff3b"
Jan 17 12:07:51.254428 kubelet[2902]: W0117 12:07:51.254374 2902 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-c8756aff3b&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused
Jan 17 12:07:51.254589 kubelet[2902]: E0117 12:07:51.254568 2902 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.20.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-c8756aff3b&limit=500&resourceVersion=0": dial tcp 10.200.20.40:6443: connect: connection refused
Jan 17 12:07:51.756033 kubelet[2902]: E0117 12:07:51.755994 2902 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.20.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.20.40:6443: connect: connection refused
Jan 17 12:07:51.799174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4201355929.mount: Deactivated successfully.
Jan 17 12:07:51.820720 containerd[1735]: time="2025-01-17T12:07:51.820669792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 12:07:51.831743 containerd[1735]: time="2025-01-17T12:07:51.831702931Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173"
Jan 17 12:07:51.835913 containerd[1735]: time="2025-01-17T12:07:51.835882939Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 12:07:51.840360 containerd[1735]: time="2025-01-17T12:07:51.840328187Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 17 12:07:51.843565 containerd[1735]: time="2025-01-17T12:07:51.843501792Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 12:07:51.849278 containerd[1735]: time="2025-01-17T12:07:51.849186202Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 12:07:51.850667 containerd[1735]: time="2025-01-17T12:07:51.850641805Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 17 12:07:51.854530 containerd[1735]: time="2025-01-17T12:07:51.854482812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 17 12:07:51.855813 containerd[1735]: time="2025-01-17T12:07:51.855332053Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.689229654s"
Jan 17 12:07:51.860479 containerd[1735]: time="2025-01-17T12:07:51.860435462Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.68699961s"
Jan 17 12:07:51.872925 containerd[1735]: time="2025-01-17T12:07:51.872884005Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 1.694904905s"
Jan 17 12:07:52.512740 containerd[1735]: time="2025-01-17T12:07:52.512552346Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:07:52.512740 containerd[1735]: time="2025-01-17T12:07:52.512606706Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:07:52.512740 containerd[1735]: time="2025-01-17T12:07:52.512627226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:07:52.514186 containerd[1735]: time="2025-01-17T12:07:52.514050869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:07:52.520666 containerd[1735]: time="2025-01-17T12:07:52.519381118Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:07:52.521302 containerd[1735]: time="2025-01-17T12:07:52.520727681Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:07:52.521377 containerd[1735]: time="2025-01-17T12:07:52.521326442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:07:52.521601 containerd[1735]: time="2025-01-17T12:07:52.521558882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:07:52.522169 containerd[1735]: time="2025-01-17T12:07:52.521240682Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:07:52.522169 containerd[1735]: time="2025-01-17T12:07:52.522013403Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:07:52.522169 containerd[1735]: time="2025-01-17T12:07:52.522024923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:07:52.522169 containerd[1735]: time="2025-01-17T12:07:52.522115563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:07:52.542299 systemd[1]: Started cri-containerd-24cd450e837b4fc7032012e094fa8930dec5eb5b6ef8b38e0925bf5b49f08960.scope - libcontainer container 24cd450e837b4fc7032012e094fa8930dec5eb5b6ef8b38e0925bf5b49f08960.
Jan 17 12:07:52.543343 systemd[1]: Started cri-containerd-254c9c3a1883267764298283ce1264d6c7e6f9f011346e3f82e519076ad2abdc.scope - libcontainer container 254c9c3a1883267764298283ce1264d6c7e6f9f011346e3f82e519076ad2abdc.
Jan 17 12:07:52.548441 systemd[1]: Started cri-containerd-d983cb234edee94d22ee255dd0a9922308a485e645c5a7ba6c60ec772c31be79.scope - libcontainer container d983cb234edee94d22ee255dd0a9922308a485e645c5a7ba6c60ec772c31be79.
Jan 17 12:07:52.588118 containerd[1735]: time="2025-01-17T12:07:52.588063921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-c8756aff3b,Uid:55002162ce43a351edb74b3acb242260,Namespace:kube-system,Attempt:0,} returns sandbox id \"24cd450e837b4fc7032012e094fa8930dec5eb5b6ef8b38e0925bf5b49f08960\""
Jan 17 12:07:52.602428 containerd[1735]: time="2025-01-17T12:07:52.602386706Z" level=info msg="CreateContainer within sandbox \"24cd450e837b4fc7032012e094fa8930dec5eb5b6ef8b38e0925bf5b49f08960\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 17 12:07:52.612678 containerd[1735]: time="2025-01-17T12:07:52.612642965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-c8756aff3b,Uid:3411bc22a95e172a32aaa89d5c10c5af,Namespace:kube-system,Attempt:0,} returns sandbox id \"254c9c3a1883267764298283ce1264d6c7e6f9f011346e3f82e519076ad2abdc\""
Jan 17 12:07:52.614277 containerd[1735]: time="2025-01-17T12:07:52.614211808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-c8756aff3b,Uid:ec721c75953d8e7d0a537a5c0ab69fb7,Namespace:kube-system,Attempt:0,} returns sandbox id \"d983cb234edee94d22ee255dd0a9922308a485e645c5a7ba6c60ec772c31be79\""
Jan 17 12:07:52.617923 containerd[1735]: time="2025-01-17T12:07:52.617883174Z" level=info msg="CreateContainer within sandbox \"254c9c3a1883267764298283ce1264d6c7e6f9f011346e3f82e519076ad2abdc\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 17 12:07:52.619117 containerd[1735]: time="2025-01-17T12:07:52.619065496Z" level=info msg="CreateContainer within sandbox \"d983cb234edee94d22ee255dd0a9922308a485e645c5a7ba6c60ec772c31be79\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 17 12:07:52.673289 containerd[1735]: time="2025-01-17T12:07:52.673244753Z" level=info msg="CreateContainer within sandbox \"24cd450e837b4fc7032012e094fa8930dec5eb5b6ef8b38e0925bf5b49f08960\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1bad12623b7d18f8b5e850fb4844de5b0010c02046033254be13759a2009cd37\""
Jan 17 12:07:52.674114 containerd[1735]: time="2025-01-17T12:07:52.674058834Z" level=info msg="StartContainer for \"1bad12623b7d18f8b5e850fb4844de5b0010c02046033254be13759a2009cd37\""
Jan 17 12:07:52.685979 containerd[1735]: time="2025-01-17T12:07:52.685919815Z" level=info msg="CreateContainer within sandbox \"254c9c3a1883267764298283ce1264d6c7e6f9f011346e3f82e519076ad2abdc\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"de3aa2b4e68de455f9e68f58cf12a95001a94b478535a0994b2075a9b5a6e29b\""
Jan 17 12:07:52.686916 containerd[1735]: time="2025-01-17T12:07:52.686818017Z" level=info msg="StartContainer for \"de3aa2b4e68de455f9e68f58cf12a95001a94b478535a0994b2075a9b5a6e29b\""
Jan 17 12:07:52.691130 containerd[1735]: time="2025-01-17T12:07:52.690628544Z" level=info msg="CreateContainer within sandbox \"d983cb234edee94d22ee255dd0a9922308a485e645c5a7ba6c60ec772c31be79\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e92cec37ff1c5612037ae21faa129ee1a4ea21534f4046488af8e8c0618cb29d\""
Jan 17 12:07:52.692310 containerd[1735]: time="2025-01-17T12:07:52.692275747Z" level=info msg="StartContainer for \"e92cec37ff1c5612037ae21faa129ee1a4ea21534f4046488af8e8c0618cb29d\""
Jan 17 12:07:52.701296 systemd[1]: Started cri-containerd-1bad12623b7d18f8b5e850fb4844de5b0010c02046033254be13759a2009cd37.scope - libcontainer container 1bad12623b7d18f8b5e850fb4844de5b0010c02046033254be13759a2009cd37.
Jan 17 12:07:52.724383 kubelet[2902]: E0117 12:07:52.724089 2902 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.20.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-c8756aff3b?timeout=10s\": dial tcp 10.200.20.40:6443: connect: connection refused" interval="3.2s"
Jan 17 12:07:52.727640 systemd[1]: Started cri-containerd-de3aa2b4e68de455f9e68f58cf12a95001a94b478535a0994b2075a9b5a6e29b.scope - libcontainer container de3aa2b4e68de455f9e68f58cf12a95001a94b478535a0994b2075a9b5a6e29b.
Jan 17 12:07:52.735281 systemd[1]: Started cri-containerd-e92cec37ff1c5612037ae21faa129ee1a4ea21534f4046488af8e8c0618cb29d.scope - libcontainer container e92cec37ff1c5612037ae21faa129ee1a4ea21534f4046488af8e8c0618cb29d.
Jan 17 12:07:52.763993 containerd[1735]: time="2025-01-17T12:07:52.763882075Z" level=info msg="StartContainer for \"1bad12623b7d18f8b5e850fb4844de5b0010c02046033254be13759a2009cd37\" returns successfully"
Jan 17 12:07:52.808071 containerd[1735]: time="2025-01-17T12:07:52.808005233Z" level=info msg="StartContainer for \"de3aa2b4e68de455f9e68f58cf12a95001a94b478535a0994b2075a9b5a6e29b\" returns successfully"
Jan 17 12:07:52.826663 containerd[1735]: time="2025-01-17T12:07:52.826612907Z" level=info msg="StartContainer for \"e92cec37ff1c5612037ae21faa129ee1a4ea21534f4046488af8e8c0618cb29d\" returns successfully"
Jan 17 12:07:52.830426 kubelet[2902]: I0117 12:07:52.830277 2902 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-c8756aff3b"
Jan 17 12:07:52.830817 kubelet[2902]: E0117 12:07:52.830663 2902 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.20.40:6443/api/v1/nodes\": dial tcp 10.200.20.40:6443: connect: connection refused" node="ci-4081.3.0-a-c8756aff3b"
Jan 17 12:07:55.425413 kubelet[2902]: E0117 12:07:55.425366 2902 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081.3.0-a-c8756aff3b" not found
Jan 17 12:07:55.712499 kubelet[2902]: I0117 12:07:55.712139 2902 apiserver.go:52] "Watching apiserver"
Jan 17 12:07:55.717213 kubelet[2902]: I0117 12:07:55.717168 2902 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 17 12:07:55.823756 kubelet[2902]: E0117 12:07:55.823720 2902 csi_plugin.go:308] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "ci-4081.3.0-a-c8756aff3b" not found
Jan 17 12:07:55.938413 kubelet[2902]: E0117 12:07:55.938343 2902 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.0-a-c8756aff3b\" not found" node="ci-4081.3.0-a-c8756aff3b"
Jan 17 12:07:56.034420 kubelet[2902]: I0117 12:07:56.033961 2902 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-c8756aff3b"
Jan 17 12:07:56.049072 kubelet[2902]: I0117 12:07:56.049032 2902 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-a-c8756aff3b"
Jan 17 12:07:57.219672 systemd[1]: Reloading requested from client PID 3175 ('systemctl') (unit session-9.scope)...
Jan 17 12:07:57.219690 systemd[1]: Reloading...
Jan 17 12:07:57.352583 zram_generator::config[3215]: No configuration found.
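The "connection refused" failures against https://10.200.20.40:6443 above (the lease retry and the node registration) are normal bootstrap ordering: the kubelet starts talking to the API server before the kube-apiserver container it just launched is listening, and both calls succeed a few seconds later (registration at 12:07:56). A hedged way to watch for the listener from the node, with the endpoint taken from the log lines:

  # /healthz is served to unauthenticated clients under default RBAC (system:public-info-viewer).
  until curl -ksf https://10.200.20.40:6443/healthz >/dev/null; do sleep 1; done
  ss -tlnp | grep 6443   # confirm which process now owns the socket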
Jan 17 12:07:57.479066 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 12:07:57.571192 systemd[1]: Reloading finished in 351 ms.
Jan 17 12:07:57.610750 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:07:57.615701 systemd[1]: kubelet.service: Deactivated successfully.
Jan 17 12:07:57.615933 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:07:57.615992 systemd[1]: kubelet.service: Consumed 1.284s CPU time, 113.6M memory peak, 0B memory swap peak.
Jan 17 12:07:57.620454 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 17 12:07:57.738492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 17 12:07:57.747463 (kubelet)[3279]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 17 12:07:57.798895 kubelet[3279]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 12:07:57.798895 kubelet[3279]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 17 12:07:57.798895 kubelet[3279]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 17 12:07:57.799331 kubelet[3279]: I0117 12:07:57.798967 3279 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 17 12:07:57.804214 kubelet[3279]: I0117 12:07:57.804077 3279 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 17 12:07:57.804214 kubelet[3279]: I0117 12:07:57.804123 3279 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 17 12:07:57.804814 kubelet[3279]: I0117 12:07:57.804683 3279 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 17 12:07:57.808740 kubelet[3279]: I0117 12:07:57.808701 3279 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 17 12:07:57.811201 kubelet[3279]: I0117 12:07:57.811165 3279 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 17 12:07:57.816934 kubelet[3279]: I0117 12:07:57.816861 3279 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
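The docker.socket warning above is systemd rewriting a legacy /var/run path on the fly; the permanent fix it asks for is a one-line change to ListenStream=. A sketch using a drop-in so the shipped unit stays untouched (the drop-in file name here is made up):

  mkdir -p /etc/systemd/system/docker.socket.d
  cat <<'EOF' >/etc/systemd/system/docker.socket.d/10-run-dir.conf
  [Socket]
  ListenStream=
  ListenStream=/run/docker.sock
  EOF
  systemctl daemon-reload

The empty ListenStream= clears the inherited value first; without it systemd would listen on both paths instead of replacing the legacy one.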
defaulting to /" Jan 17 12:07:57.817130 kubelet[3279]: I0117 12:07:57.817062 3279 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:07:57.817343 kubelet[3279]: I0117 12:07:57.817089 3279 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-c8756aff3b","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:07:57.817424 kubelet[3279]: I0117 12:07:57.817355 3279 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:07:57.817424 kubelet[3279]: I0117 12:07:57.817380 3279 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:07:57.817476 kubelet[3279]: I0117 12:07:57.817430 3279 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:07:57.817590 kubelet[3279]: I0117 12:07:57.817574 3279 kubelet.go:400] "Attempting to sync node with API server" Jan 17 12:07:57.817636 kubelet[3279]: I0117 12:07:57.817594 3279 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:07:57.817636 kubelet[3279]: I0117 12:07:57.817622 3279 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:07:57.817680 kubelet[3279]: I0117 12:07:57.817640 3279 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:07:57.820203 kubelet[3279]: I0117 12:07:57.819583 3279 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:07:57.820203 kubelet[3279]: I0117 12:07:57.819845 3279 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:07:57.820388 kubelet[3279]: I0117 12:07:57.820357 3279 server.go:1264] "Started kubelet" Jan 17 12:07:57.823739 kubelet[3279]: I0117 12:07:57.823698 3279 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:07:57.831751 kubelet[3279]: I0117 12:07:57.831701 3279 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:07:57.833043 kubelet[3279]: I0117 12:07:57.833005 3279 volume_manager.go:291] 
"Starting Kubelet Volume Manager" Jan 17 12:07:58.039148 kubelet[3279]: I0117 12:07:57.835277 3279 server.go:455] "Adding debug handlers to kubelet server" Jan 17 12:07:58.039148 kubelet[3279]: I0117 12:07:57.836052 3279 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:07:58.039148 kubelet[3279]: I0117 12:07:57.836317 3279 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 17 12:07:58.039148 kubelet[3279]: I0117 12:07:57.862050 3279 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:07:58.039148 kubelet[3279]: I0117 12:07:57.863249 3279 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 12:07:58.039148 kubelet[3279]: I0117 12:07:57.863283 3279 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:07:58.039148 kubelet[3279]: I0117 12:07:57.863302 3279 kubelet.go:2337] "Starting kubelet main sync loop" Jan 17 12:07:58.039148 kubelet[3279]: E0117 12:07:57.863345 3279 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:07:58.039148 kubelet[3279]: I0117 12:07:57.868806 3279 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:07:58.039148 kubelet[3279]: I0117 12:07:57.868917 3279 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:07:58.039148 kubelet[3279]: E0117 12:07:57.883122 3279 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:07:58.039148 kubelet[3279]: I0117 12:07:57.890924 3279 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:07:58.039148 kubelet[3279]: I0117 12:07:57.936069 3279 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-c8756aff3b" Jan 17 12:07:58.039148 kubelet[3279]: I0117 12:07:57.949072 3279 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:07:58.039148 kubelet[3279]: I0117 12:07:57.949142 3279 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:07:58.039148 kubelet[3279]: I0117 12:07:57.949285 3279 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:07:58.039148 kubelet[3279]: I0117 12:07:57.952515 3279 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.0-a-c8756aff3b" Jan 17 12:07:58.039549 kubelet[3279]: E0117 12:07:57.964154 3279 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 12:07:58.039549 kubelet[3279]: I0117 12:07:58.039082 3279 reconciler.go:26] "Reconciler: start to sync state" Jan 17 12:07:58.041121 kubelet[3279]: I0117 12:07:58.040811 3279 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-a-c8756aff3b" Jan 17 12:07:58.041121 kubelet[3279]: I0117 12:07:58.040958 3279 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 12:07:58.041121 kubelet[3279]: I0117 12:07:58.040972 3279 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 12:07:58.041121 kubelet[3279]: I0117 12:07:58.040993 3279 policy_none.go:49] "None policy: Start" Jan 17 12:07:58.042203 kubelet[3279]: I0117 12:07:58.041392 3279 server.go:227] "Starting to serve the 
podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:07:58.043075 kubelet[3279]: I0117 12:07:58.043046 3279 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:07:58.043221 kubelet[3279]: I0117 12:07:58.043085 3279 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:07:58.043709 kubelet[3279]: I0117 12:07:58.043271 3279 state_mem.go:75] "Updated machine memory state" Jan 17 12:07:58.055138 kubelet[3279]: I0117 12:07:58.054977 3279 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:07:58.055275 kubelet[3279]: I0117 12:07:58.055191 3279 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 12:07:58.055322 kubelet[3279]: I0117 12:07:58.055299 3279 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:07:58.164647 kubelet[3279]: I0117 12:07:58.164599 3279 topology_manager.go:215] "Topology Admit Handler" podUID="ec721c75953d8e7d0a537a5c0ab69fb7" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-a-c8756aff3b" Jan 17 12:07:58.165012 kubelet[3279]: I0117 12:07:58.164730 3279 topology_manager.go:215] "Topology Admit Handler" podUID="3411bc22a95e172a32aaa89d5c10c5af" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-a-c8756aff3b" Jan 17 12:07:58.165012 kubelet[3279]: I0117 12:07:58.164772 3279 topology_manager.go:215] "Topology Admit Handler" podUID="55002162ce43a351edb74b3acb242260" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-a-c8756aff3b" Jan 17 12:07:58.177124 kubelet[3279]: W0117 12:07:58.177076 3279 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:07:58.183626 kubelet[3279]: W0117 12:07:58.183577 3279 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:07:58.184090 kubelet[3279]: W0117 12:07:58.184063 3279 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:07:58.240432 kubelet[3279]: I0117 12:07:58.240316 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ec721c75953d8e7d0a537a5c0ab69fb7-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-c8756aff3b\" (UID: \"ec721c75953d8e7d0a537a5c0ab69fb7\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-c8756aff3b" Jan 17 12:07:58.240432 kubelet[3279]: I0117 12:07:58.240369 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ec721c75953d8e7d0a537a5c0ab69fb7-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-c8756aff3b\" (UID: \"ec721c75953d8e7d0a537a5c0ab69fb7\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-c8756aff3b" Jan 17 12:07:58.240930 kubelet[3279]: I0117 12:07:58.240718 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3411bc22a95e172a32aaa89d5c10c5af-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-c8756aff3b\" (UID: \"3411bc22a95e172a32aaa89d5c10c5af\") " 
pod="kube-system/kube-controller-manager-ci-4081.3.0-a-c8756aff3b" Jan 17 12:07:58.240930 kubelet[3279]: I0117 12:07:58.240777 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3411bc22a95e172a32aaa89d5c10c5af-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-c8756aff3b\" (UID: \"3411bc22a95e172a32aaa89d5c10c5af\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-c8756aff3b" Jan 17 12:07:58.240930 kubelet[3279]: I0117 12:07:58.240811 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/55002162ce43a351edb74b3acb242260-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-c8756aff3b\" (UID: \"55002162ce43a351edb74b3acb242260\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-c8756aff3b" Jan 17 12:07:58.240930 kubelet[3279]: I0117 12:07:58.240840 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ec721c75953d8e7d0a537a5c0ab69fb7-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-c8756aff3b\" (UID: \"ec721c75953d8e7d0a537a5c0ab69fb7\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-c8756aff3b" Jan 17 12:07:58.240930 kubelet[3279]: I0117 12:07:58.240866 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3411bc22a95e172a32aaa89d5c10c5af-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-c8756aff3b\" (UID: \"3411bc22a95e172a32aaa89d5c10c5af\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-c8756aff3b" Jan 17 12:07:58.241375 kubelet[3279]: I0117 12:07:58.240893 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3411bc22a95e172a32aaa89d5c10c5af-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-c8756aff3b\" (UID: \"3411bc22a95e172a32aaa89d5c10c5af\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-c8756aff3b" Jan 17 12:07:58.241375 kubelet[3279]: I0117 12:07:58.240932 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3411bc22a95e172a32aaa89d5c10c5af-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-c8756aff3b\" (UID: \"3411bc22a95e172a32aaa89d5c10c5af\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-c8756aff3b" Jan 17 12:07:58.820365 kubelet[3279]: I0117 12:07:58.818651 3279 apiserver.go:52] "Watching apiserver" Jan 17 12:07:58.837114 kubelet[3279]: I0117 12:07:58.837030 3279 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 17 12:07:58.967197 kubelet[3279]: W0117 12:07:58.966609 3279 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:07:58.967197 kubelet[3279]: E0117 12:07:58.966677 3279 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-a-c8756aff3b\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.0-a-c8756aff3b" Jan 17 12:07:59.035850 kubelet[3279]: I0117 12:07:59.035640 3279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-ci-4081.3.0-a-c8756aff3b" podStartSLOduration=1.035618655 podStartE2EDuration="1.035618655s" podCreationTimestamp="2025-01-17 12:07:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:07:59.010786216 +0000 UTC m=+1.258716569" watchObservedRunningTime="2025-01-17 12:07:59.035618655 +0000 UTC m=+1.283549008" Jan 17 12:07:59.072559 kubelet[3279]: I0117 12:07:59.072169 3279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-a-c8756aff3b" podStartSLOduration=1.072148112 podStartE2EDuration="1.072148112s" podCreationTimestamp="2025-01-17 12:07:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:07:59.036858977 +0000 UTC m=+1.284789330" watchObservedRunningTime="2025-01-17 12:07:59.072148112 +0000 UTC m=+1.320078465" Jan 17 12:07:59.103735 kubelet[3279]: I0117 12:07:59.102647 3279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-c8756aff3b" podStartSLOduration=1.10260544 podStartE2EDuration="1.10260544s" podCreationTimestamp="2025-01-17 12:07:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:07:59.072373713 +0000 UTC m=+1.320304026" watchObservedRunningTime="2025-01-17 12:07:59.10260544 +0000 UTC m=+1.350535793" Jan 17 12:08:03.023943 sudo[2331]: pam_unix(sudo:session): session closed for user root Jan 17 12:08:03.118865 sshd[2328]: pam_unix(sshd:session): session closed for user core Jan 17 12:08:03.122042 systemd[1]: sshd@6-10.200.20.40:22-10.200.16.10:44516.service: Deactivated successfully. Jan 17 12:08:03.125799 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 12:08:03.125977 systemd[1]: session-9.scope: Consumed 6.389s CPU time, 186.2M memory peak, 0B memory swap peak. Jan 17 12:08:03.127618 systemd-logind[1690]: Session 9 logged out. Waiting for processes to exit. Jan 17 12:08:03.128758 systemd-logind[1690]: Removed session 9. Jan 17 12:08:13.523280 kubelet[3279]: I0117 12:08:13.523243 3279 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 12:08:13.523724 containerd[1735]: time="2025-01-17T12:08:13.523610702Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 12:08:13.523904 kubelet[3279]: I0117 12:08:13.523782 3279 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 12:08:14.289277 kubelet[3279]: I0117 12:08:14.289229 3279 topology_manager.go:215] "Topology Admit Handler" podUID="47aa2501-d5b8-439e-9b20-796d2b34d296" podNamespace="kube-system" podName="kube-proxy-zdn7s" Jan 17 12:08:14.301089 systemd[1]: Created slice kubepods-besteffort-pod47aa2501_d5b8_439e_9b20_796d2b34d296.slice - libcontainer container kubepods-besteffort-pod47aa2501_d5b8_439e_9b20_796d2b34d296.slice. 
Jan 17 12:08:14.442285 kubelet[3279]: I0117 12:08:14.442236 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/47aa2501-d5b8-439e-9b20-796d2b34d296-lib-modules\") pod \"kube-proxy-zdn7s\" (UID: \"47aa2501-d5b8-439e-9b20-796d2b34d296\") " pod="kube-system/kube-proxy-zdn7s"
Jan 17 12:08:14.442285 kubelet[3279]: I0117 12:08:14.442290 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/47aa2501-d5b8-439e-9b20-796d2b34d296-xtables-lock\") pod \"kube-proxy-zdn7s\" (UID: \"47aa2501-d5b8-439e-9b20-796d2b34d296\") " pod="kube-system/kube-proxy-zdn7s"
Jan 17 12:08:14.442465 kubelet[3279]: I0117 12:08:14.442313 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfxc8\" (UniqueName: \"kubernetes.io/projected/47aa2501-d5b8-439e-9b20-796d2b34d296-kube-api-access-rfxc8\") pod \"kube-proxy-zdn7s\" (UID: \"47aa2501-d5b8-439e-9b20-796d2b34d296\") " pod="kube-system/kube-proxy-zdn7s"
Jan 17 12:08:14.442465 kubelet[3279]: I0117 12:08:14.442338 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/47aa2501-d5b8-439e-9b20-796d2b34d296-kube-proxy\") pod \"kube-proxy-zdn7s\" (UID: \"47aa2501-d5b8-439e-9b20-796d2b34d296\") " pod="kube-system/kube-proxy-zdn7s"
Jan 17 12:08:14.614252 kubelet[3279]: I0117 12:08:14.614046 3279 topology_manager.go:215] "Topology Admit Handler" podUID="536164fb-29f1-4b94-a14c-e0acd6cd51e8" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-r7s4j"
Jan 17 12:08:14.615341 containerd[1735]: time="2025-01-17T12:08:14.614993073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zdn7s,Uid:47aa2501-d5b8-439e-9b20-796d2b34d296,Namespace:kube-system,Attempt:0,}"
Jan 17 12:08:14.627310 systemd[1]: Created slice kubepods-besteffort-pod536164fb_29f1_4b94_a14c_e0acd6cd51e8.slice - libcontainer container kubepods-besteffort-pod536164fb_29f1_4b94_a14c_e0acd6cd51e8.slice.
Jan 17 12:08:14.678619 containerd[1735]: time="2025-01-17T12:08:14.678367974Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:08:14.678619 containerd[1735]: time="2025-01-17T12:08:14.678424494Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:08:14.678619 containerd[1735]: time="2025-01-17T12:08:14.678469854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:08:14.678803 containerd[1735]: time="2025-01-17T12:08:14.678594454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:08:14.694631 systemd[1]: run-containerd-runc-k8s.io-d6761e76d4c6006788e30152dbb4df7cf7ff4a5efbabfbf4f1de0cd379c247d6-runc.2GROIE.mount: Deactivated successfully.
Jan 17 12:08:14.702371 systemd[1]: Started cri-containerd-d6761e76d4c6006788e30152dbb4df7cf7ff4a5efbabfbf4f1de0cd379c247d6.scope - libcontainer container d6761e76d4c6006788e30152dbb4df7cf7ff4a5efbabfbf4f1de0cd379c247d6.
Jan 17 12:08:14.724464 containerd[1735]: time="2025-01-17T12:08:14.724391127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zdn7s,Uid:47aa2501-d5b8-439e-9b20-796d2b34d296,Namespace:kube-system,Attempt:0,} returns sandbox id \"d6761e76d4c6006788e30152dbb4df7cf7ff4a5efbabfbf4f1de0cd379c247d6\""
Jan 17 12:08:14.728073 containerd[1735]: time="2025-01-17T12:08:14.728013333Z" level=info msg="CreateContainer within sandbox \"d6761e76d4c6006788e30152dbb4df7cf7ff4a5efbabfbf4f1de0cd379c247d6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 17 12:08:14.744652 kubelet[3279]: I0117 12:08:14.744565 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84kj6\" (UniqueName: \"kubernetes.io/projected/536164fb-29f1-4b94-a14c-e0acd6cd51e8-kube-api-access-84kj6\") pod \"tigera-operator-7bc55997bb-r7s4j\" (UID: \"536164fb-29f1-4b94-a14c-e0acd6cd51e8\") " pod="tigera-operator/tigera-operator-7bc55997bb-r7s4j"
Jan 17 12:08:14.744652 kubelet[3279]: I0117 12:08:14.744610 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/536164fb-29f1-4b94-a14c-e0acd6cd51e8-var-lib-calico\") pod \"tigera-operator-7bc55997bb-r7s4j\" (UID: \"536164fb-29f1-4b94-a14c-e0acd6cd51e8\") " pod="tigera-operator/tigera-operator-7bc55997bb-r7s4j"
Jan 17 12:08:14.776675 containerd[1735]: time="2025-01-17T12:08:14.776620450Z" level=info msg="CreateContainer within sandbox \"d6761e76d4c6006788e30152dbb4df7cf7ff4a5efbabfbf4f1de0cd379c247d6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4357ae3e5f5e6069f45c83b321133bf8d67f8fbb6b0410138902de1973e8b21a\""
Jan 17 12:08:14.777514 containerd[1735]: time="2025-01-17T12:08:14.777481331Z" level=info msg="StartContainer for \"4357ae3e5f5e6069f45c83b321133bf8d67f8fbb6b0410138902de1973e8b21a\""
Jan 17 12:08:14.805313 systemd[1]: Started cri-containerd-4357ae3e5f5e6069f45c83b321133bf8d67f8fbb6b0410138902de1973e8b21a.scope - libcontainer container 4357ae3e5f5e6069f45c83b321133bf8d67f8fbb6b0410138902de1973e8b21a.
Jan 17 12:08:14.836263 containerd[1735]: time="2025-01-17T12:08:14.836189864Z" level=info msg="StartContainer for \"4357ae3e5f5e6069f45c83b321133bf8d67f8fbb6b0410138902de1973e8b21a\" returns successfully"
Jan 17 12:08:14.932113 containerd[1735]: time="2025-01-17T12:08:14.931945696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-r7s4j,Uid:536164fb-29f1-4b94-a14c-e0acd6cd51e8,Namespace:tigera-operator,Attempt:0,}"
Jan 17 12:08:14.976394 containerd[1735]: time="2025-01-17T12:08:14.976170246Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:08:14.976394 containerd[1735]: time="2025-01-17T12:08:14.976236166Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:08:14.976394 containerd[1735]: time="2025-01-17T12:08:14.976247166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:08:14.977570 containerd[1735]: time="2025-01-17T12:08:14.976339327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:08:14.997318 systemd[1]: Started cri-containerd-7062c645630402b1411cb6a8c8b23eb93c3a883209dc349b5efb74a2d828effd.scope - libcontainer container 7062c645630402b1411cb6a8c8b23eb93c3a883209dc349b5efb74a2d828effd.
Jan 17 12:08:15.033218 containerd[1735]: time="2025-01-17T12:08:15.032622656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-r7s4j,Uid:536164fb-29f1-4b94-a14c-e0acd6cd51e8,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"7062c645630402b1411cb6a8c8b23eb93c3a883209dc349b5efb74a2d828effd\""
Jan 17 12:08:15.035426 containerd[1735]: time="2025-01-17T12:08:15.035317700Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Jan 17 12:08:16.802027 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2861571816.mount: Deactivated successfully.
Jan 17 12:08:17.169175 containerd[1735]: time="2025-01-17T12:08:17.169123526Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:08:17.172320 containerd[1735]: time="2025-01-17T12:08:17.172269531Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=19125944"
Jan 17 12:08:17.177251 containerd[1735]: time="2025-01-17T12:08:17.177180938Z" level=info msg="ImageCreate event name:\"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:08:17.183136 containerd[1735]: time="2025-01-17T12:08:17.183076948Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 12:08:17.184396 containerd[1735]: time="2025-01-17T12:08:17.183843669Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"19120155\" in 2.148483409s"
Jan 17 12:08:17.184396 containerd[1735]: time="2025-01-17T12:08:17.183881949Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\""
Jan 17 12:08:17.186909 containerd[1735]: time="2025-01-17T12:08:17.186762954Z" level=info msg="CreateContainer within sandbox \"7062c645630402b1411cb6a8c8b23eb93c3a883209dc349b5efb74a2d828effd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 17 12:08:17.228243 containerd[1735]: time="2025-01-17T12:08:17.228194739Z" level=info msg="CreateContainer within sandbox \"7062c645630402b1411cb6a8c8b23eb93c3a883209dc349b5efb74a2d828effd\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"242ad95e8026fb1e7aa77cff77130bf514b111fad7f75e2f2d6dd78154694767\""
Jan 17 12:08:17.228876 containerd[1735]: time="2025-01-17T12:08:17.228821020Z" level=info msg="StartContainer for \"242ad95e8026fb1e7aa77cff77130bf514b111fad7f75e2f2d6dd78154694767\""
Jan 17 12:08:17.256301 systemd[1]: Started cri-containerd-242ad95e8026fb1e7aa77cff77130bf514b111fad7f75e2f2d6dd78154694767.scope - libcontainer container 242ad95e8026fb1e7aa77cff77130bf514b111fad7f75e2f2d6dd78154694767.
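The pull above moved 19,125,944 bytes in about 2.15 s and then emitted ImageCreate events for the tag and its digests. CRI-pulled images land in containerd's k8s.io namespace, so a plain ctr invocation shows nothing; two hedged ways to see the image on the node:

  ctr --namespace k8s.io images ls | grep tigera/operator   # containerd's CRI namespace
  crictl images | grep tigera                               # the same store, via the CRI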
Jan 17 12:08:17.286618 containerd[1735]: time="2025-01-17T12:08:17.286494952Z" level=info msg="StartContainer for \"242ad95e8026fb1e7aa77cff77130bf514b111fad7f75e2f2d6dd78154694767\" returns successfully"
Jan 17 12:08:17.976286 kubelet[3279]: I0117 12:08:17.975554 3279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zdn7s" podStartSLOduration=3.975533845 podStartE2EDuration="3.975533845s" podCreationTimestamp="2025-01-17 12:08:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:08:14.97212536 +0000 UTC m=+17.220055793" watchObservedRunningTime="2025-01-17 12:08:17.975533845 +0000 UTC m=+20.223464198"
Jan 17 12:08:21.258955 kubelet[3279]: I0117 12:08:21.258883 3279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-r7s4j" podStartSLOduration=5.108577243 podStartE2EDuration="7.258864134s" podCreationTimestamp="2025-01-17 12:08:14 +0000 UTC" firstStartedPulling="2025-01-17 12:08:15.034364419 +0000 UTC m=+17.282294772" lastFinishedPulling="2025-01-17 12:08:17.18465131 +0000 UTC m=+19.432581663" observedRunningTime="2025-01-17 12:08:17.975929246 +0000 UTC m=+20.223859599" watchObservedRunningTime="2025-01-17 12:08:21.258864134 +0000 UTC m=+23.506794487"
Jan 17 12:08:21.259369 kubelet[3279]: I0117 12:08:21.259067 3279 topology_manager.go:215] "Topology Admit Handler" podUID="8f55e838-a8c4-4712-91ce-6fa1b11545c0" podNamespace="calico-system" podName="calico-typha-78b99999f5-5v47d"
Jan 17 12:08:21.266001 systemd[1]: Created slice kubepods-besteffort-pod8f55e838_a8c4_4712_91ce_6fa1b11545c0.slice - libcontainer container kubepods-besteffort-pod8f55e838_a8c4_4712_91ce_6fa1b11545c0.slice.
Jan 17 12:08:21.282659 kubelet[3279]: I0117 12:08:21.282617 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8f55e838-a8c4-4712-91ce-6fa1b11545c0-typha-certs\") pod \"calico-typha-78b99999f5-5v47d\" (UID: \"8f55e838-a8c4-4712-91ce-6fa1b11545c0\") " pod="calico-system/calico-typha-78b99999f5-5v47d"
Jan 17 12:08:21.282659 kubelet[3279]: I0117 12:08:21.282656 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f55e838-a8c4-4712-91ce-6fa1b11545c0-tigera-ca-bundle\") pod \"calico-typha-78b99999f5-5v47d\" (UID: \"8f55e838-a8c4-4712-91ce-6fa1b11545c0\") " pod="calico-system/calico-typha-78b99999f5-5v47d"
Jan 17 12:08:21.282659 kubelet[3279]: I0117 12:08:21.282680 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvfck\" (UniqueName: \"kubernetes.io/projected/8f55e838-a8c4-4712-91ce-6fa1b11545c0-kube-api-access-fvfck\") pod \"calico-typha-78b99999f5-5v47d\" (UID: \"8f55e838-a8c4-4712-91ce-6fa1b11545c0\") " pod="calico-system/calico-typha-78b99999f5-5v47d"
Jan 17 12:08:21.367186 kubelet[3279]: I0117 12:08:21.367042 3279 topology_manager.go:215] "Topology Admit Handler" podUID="3139c8f5-5a65-4d68-966c-f5acb20fbf7c" podNamespace="calico-system" podName="calico-node-4wt4b"
Jan 17 12:08:21.381082 systemd[1]: Created slice kubepods-besteffort-pod3139c8f5_5a65_4d68_966c_f5acb20fbf7c.slice - libcontainer container kubepods-besteffort-pod3139c8f5_5a65_4d68_966c_f5acb20fbf7c.slice.
Jan 17 12:08:21.384248 kubelet[3279]: I0117 12:08:21.384206 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3139c8f5-5a65-4d68-966c-f5acb20fbf7c-node-certs\") pod \"calico-node-4wt4b\" (UID: \"3139c8f5-5a65-4d68-966c-f5acb20fbf7c\") " pod="calico-system/calico-node-4wt4b"
Jan 17 12:08:21.386056 kubelet[3279]: I0117 12:08:21.385748 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3139c8f5-5a65-4d68-966c-f5acb20fbf7c-lib-modules\") pod \"calico-node-4wt4b\" (UID: \"3139c8f5-5a65-4d68-966c-f5acb20fbf7c\") " pod="calico-system/calico-node-4wt4b"
Jan 17 12:08:21.386056 kubelet[3279]: I0117 12:08:21.385831 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3139c8f5-5a65-4d68-966c-f5acb20fbf7c-policysync\") pod \"calico-node-4wt4b\" (UID: \"3139c8f5-5a65-4d68-966c-f5acb20fbf7c\") " pod="calico-system/calico-node-4wt4b"
Jan 17 12:08:21.386056 kubelet[3279]: I0117 12:08:21.385949 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3139c8f5-5a65-4d68-966c-f5acb20fbf7c-var-lib-calico\") pod \"calico-node-4wt4b\" (UID: \"3139c8f5-5a65-4d68-966c-f5acb20fbf7c\") " pod="calico-system/calico-node-4wt4b"
Jan 17 12:08:21.386056 kubelet[3279]: I0117 12:08:21.385981 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3139c8f5-5a65-4d68-966c-f5acb20fbf7c-xtables-lock\") pod \"calico-node-4wt4b\" (UID: \"3139c8f5-5a65-4d68-966c-f5acb20fbf7c\") " pod="calico-system/calico-node-4wt4b"
Jan 17 12:08:21.386240 kubelet[3279]: I0117 12:08:21.386001 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3139c8f5-5a65-4d68-966c-f5acb20fbf7c-var-run-calico\") pod \"calico-node-4wt4b\" (UID: \"3139c8f5-5a65-4d68-966c-f5acb20fbf7c\") " pod="calico-system/calico-node-4wt4b"
Jan 17 12:08:21.386240 kubelet[3279]: I0117 12:08:21.386134 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3139c8f5-5a65-4d68-966c-f5acb20fbf7c-tigera-ca-bundle\") pod \"calico-node-4wt4b\" (UID: \"3139c8f5-5a65-4d68-966c-f5acb20fbf7c\") " pod="calico-system/calico-node-4wt4b"
Jan 17 12:08:21.487206 kubelet[3279]: I0117 12:08:21.487026 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3139c8f5-5a65-4d68-966c-f5acb20fbf7c-flexvol-driver-host\") pod \"calico-node-4wt4b\" (UID: \"3139c8f5-5a65-4d68-966c-f5acb20fbf7c\") " pod="calico-system/calico-node-4wt4b"
Jan 17 12:08:21.487206 kubelet[3279]: I0117 12:08:21.487133 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3139c8f5-5a65-4d68-966c-f5acb20fbf7c-cni-bin-dir\") pod \"calico-node-4wt4b\" (UID: \"3139c8f5-5a65-4d68-966c-f5acb20fbf7c\") " pod="calico-system/calico-node-4wt4b"
Jan 17 12:08:21.487206 kubelet[3279]: I0117 12:08:21.487171 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3139c8f5-5a65-4d68-966c-f5acb20fbf7c-cni-log-dir\") pod \"calico-node-4wt4b\" (UID: \"3139c8f5-5a65-4d68-966c-f5acb20fbf7c\") " pod="calico-system/calico-node-4wt4b"
Jan 17 12:08:21.487728 kubelet[3279]: I0117 12:08:21.487297 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-287c8\" (UniqueName: \"kubernetes.io/projected/3139c8f5-5a65-4d68-966c-f5acb20fbf7c-kube-api-access-287c8\") pod \"calico-node-4wt4b\" (UID: \"3139c8f5-5a65-4d68-966c-f5acb20fbf7c\") " pod="calico-system/calico-node-4wt4b"
Jan 17 12:08:21.487728 kubelet[3279]: I0117 12:08:21.487328 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3139c8f5-5a65-4d68-966c-f5acb20fbf7c-cni-net-dir\") pod \"calico-node-4wt4b\" (UID: \"3139c8f5-5a65-4d68-966c-f5acb20fbf7c\") " pod="calico-system/calico-node-4wt4b"
Jan 17 12:08:21.573250 containerd[1735]: time="2025-01-17T12:08:21.573198757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-78b99999f5-5v47d,Uid:8f55e838-a8c4-4712-91ce-6fa1b11545c0,Namespace:calico-system,Attempt:0,}"
Jan 17 12:08:21.584079 kubelet[3279]: I0117 12:08:21.582747 3279 topology_manager.go:215] "Topology Admit Handler" podUID="c1a3420f-a34b-41c3-a151-733080e0373a" podNamespace="calico-system" podName="csi-node-driver-zwfv8"
Jan 17 12:08:21.584079 kubelet[3279]: E0117 12:08:21.583023 3279 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zwfv8" podUID="c1a3420f-a34b-41c3-a151-733080e0373a"
Jan 17 12:08:21.589929 kubelet[3279]: I0117 12:08:21.588640 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c1a3420f-a34b-41c3-a151-733080e0373a-varrun\") pod \"csi-node-driver-zwfv8\" (UID: \"c1a3420f-a34b-41c3-a151-733080e0373a\") " pod="calico-system/csi-node-driver-zwfv8"
Jan 17 12:08:21.589929 kubelet[3279]: I0117 12:08:21.588679 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c1a3420f-a34b-41c3-a151-733080e0373a-socket-dir\") pod \"csi-node-driver-zwfv8\" (UID: \"c1a3420f-a34b-41c3-a151-733080e0373a\") " pod="calico-system/csi-node-driver-zwfv8"
Jan 17 12:08:21.589929 kubelet[3279]: I0117 12:08:21.588720 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c1a3420f-a34b-41c3-a151-733080e0373a-registration-dir\") pod \"csi-node-driver-zwfv8\" (UID: \"c1a3420f-a34b-41c3-a151-733080e0373a\") " pod="calico-system/csi-node-driver-zwfv8"
Jan 17 12:08:21.589929 kubelet[3279]: I0117 12:08:21.588790 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c1a3420f-a34b-41c3-a151-733080e0373a-kubelet-dir\") pod \"csi-node-driver-zwfv8\" (UID: \"c1a3420f-a34b-41c3-a151-733080e0373a\") " pod="calico-system/csi-node-driver-zwfv8"
Jan 17 12:08:21.589929 kubelet[3279]: I0117 12:08:21.588818 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gqvz\" (UniqueName: \"kubernetes.io/projected/c1a3420f-a34b-41c3-a151-733080e0373a-kube-api-access-5gqvz\") pod \"csi-node-driver-zwfv8\" (UID: \"c1a3420f-a34b-41c3-a151-733080e0373a\") " pod="calico-system/csi-node-driver-zwfv8"
Jan 17 12:08:21.624584 kubelet[3279]: E0117 12:08:21.623978 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:08:21.624584 kubelet[3279]: W0117 12:08:21.624014 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:08:21.624584 kubelet[3279]: E0117 12:08:21.624045 3279 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:08:21.632899 containerd[1735]: time="2025-01-17T12:08:21.632770413Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 12:08:21.632899 containerd[1735]: time="2025-01-17T12:08:21.632826733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 12:08:21.632899 containerd[1735]: time="2025-01-17T12:08:21.632841653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:08:21.633088 containerd[1735]: time="2025-01-17T12:08:21.632943093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 12:08:21.662531 systemd[1]: Started cri-containerd-731064d75c071608bf3069af96b638bd89bbaa4a646a77705021c0f63c5ef930.scope - libcontainer container 731064d75c071608bf3069af96b638bd89bbaa4a646a77705021c0f63c5ef930.
Jan 17 12:08:21.691146 kubelet[3279]: E0117 12:08:21.690245 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:08:21.691146 kubelet[3279]: W0117 12:08:21.690270 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:08:21.691146 kubelet[3279]: E0117 12:08:21.690291 3279 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:08:21.691146 kubelet[3279]: E0117 12:08:21.690531 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:08:21.691146 kubelet[3279]: W0117 12:08:21.690541 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:08:21.691146 kubelet[3279]: E0117 12:08:21.690553 3279 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
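The repeating driver-call failures are the kubelet probing its FlexVolume plugin directory and finding Calico's nodeagent~uds entry without an executable uds binary; calico-node, whose sandbox is started just below, is what normally populates it (its flexvol-driver init container, in Calico's usual layout), after which the probe errors stop. A hedged check from the node:

  # Path copied verbatim from the errors; expect it empty until calico-node's
  # init container installs the "uds" driver (assumption about Calico's installer).
  ls -l /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/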
Jan 17 12:08:21.691146 kubelet[3279]: E0117 12:08:21.690768 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:08:21.691146 kubelet[3279]: W0117 12:08:21.690778 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:08:21.691146 kubelet[3279]: E0117 12:08:21.690789 3279 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:08:21.691809 containerd[1735]: time="2025-01-17T12:08:21.690436585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4wt4b,Uid:3139c8f5-5a65-4d68-966c-f5acb20fbf7c,Namespace:calico-system,Attempt:0,}"
Jan 17 12:08:21.691959 kubelet[3279]: E0117 12:08:21.691344 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:08:21.691959 kubelet[3279]: W0117 12:08:21.691363 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:08:21.691959 kubelet[3279]: E0117 12:08:21.691400 3279 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:08:21.693033 kubelet[3279]: E0117 12:08:21.693014 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:08:21.693125 kubelet[3279]: W0117 12:08:21.693092 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:08:21.693276 kubelet[3279]: E0117 12:08:21.693264 3279 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:08:21.693870 kubelet[3279]: E0117 12:08:21.693717 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:08:21.693870 kubelet[3279]: W0117 12:08:21.693732 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:08:21.693870 kubelet[3279]: E0117 12:08:21.693748 3279 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:08:21.694813 kubelet[3279]: E0117 12:08:21.694540 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:08:21.694813 kubelet[3279]: W0117 12:08:21.694555 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:08:21.694813 kubelet[3279]: E0117 12:08:21.694571 3279 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:08:21.695551 kubelet[3279]: E0117 12:08:21.695359 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:08:21.695551 kubelet[3279]: W0117 12:08:21.695373 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:08:21.696482 kubelet[3279]: E0117 12:08:21.695737 3279 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:08:21.696691 kubelet[3279]: E0117 12:08:21.696625 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:08:21.696691 kubelet[3279]: W0117 12:08:21.696640 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:08:21.696814 kubelet[3279]: E0117 12:08:21.696729 3279 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:08:21.697118 kubelet[3279]: E0117 12:08:21.696986 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:08:21.697118 kubelet[3279]: W0117 12:08:21.696998 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:08:21.697118 kubelet[3279]: E0117 12:08:21.697076 3279 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:08:21.697524 kubelet[3279]: E0117 12:08:21.697389 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:08:21.697524 kubelet[3279]: W0117 12:08:21.697402 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:08:21.697524 kubelet[3279]: E0117 12:08:21.697447 3279 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:08:21.697895 kubelet[3279]: E0117 12:08:21.697879 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:08:21.697994 kubelet[3279]: W0117 12:08:21.697982 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:08:21.698216 kubelet[3279]: E0117 12:08:21.698200 3279 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:08:21.698536 kubelet[3279]: E0117 12:08:21.698453 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:08:21.698536 kubelet[3279]: W0117 12:08:21.698465 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:08:21.698704 kubelet[3279]: E0117 12:08:21.698619 3279 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:08:21.699087 kubelet[3279]: E0117 12:08:21.698992 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:08:21.699087 kubelet[3279]: W0117 12:08:21.699007 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:08:21.699982 kubelet[3279]: E0117 12:08:21.699785 3279 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:08:21.700605 kubelet[3279]: E0117 12:08:21.700454 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:08:21.700605 kubelet[3279]: W0117 12:08:21.700470 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:08:21.700989 kubelet[3279]: E0117 12:08:21.700863 3279 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:08:21.701478 kubelet[3279]: E0117 12:08:21.701345 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:08:21.701737 kubelet[3279]: W0117 12:08:21.701568 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:08:21.702130 kubelet[3279]: E0117 12:08:21.701894 3279 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:08:21.702366 kubelet[3279]: E0117 12:08:21.702342 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:08:21.702890 kubelet[3279]: W0117 12:08:21.702423 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:08:21.703007 kubelet[3279]: E0117 12:08:21.702975 3279 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:08:21.703273 kubelet[3279]: E0117 12:08:21.703192 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:08:21.703273 kubelet[3279]: W0117 12:08:21.703205 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:08:21.703914 kubelet[3279]: E0117 12:08:21.703358 3279 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:08:21.704137 kubelet[3279]: E0117 12:08:21.704049 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:08:21.704137 kubelet[3279]: W0117 12:08:21.704064 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:08:21.704486 kubelet[3279]: E0117 12:08:21.704394 3279 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:08:21.705094 kubelet[3279]: E0117 12:08:21.705001 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:08:21.705094 kubelet[3279]: W0117 12:08:21.705015 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:08:21.705656 kubelet[3279]: E0117 12:08:21.705334 3279 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:08:21.706043 kubelet[3279]: E0117 12:08:21.706014 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:08:21.706411 kubelet[3279]: W0117 12:08:21.706212 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:08:21.707070 kubelet[3279]: E0117 12:08:21.706820 3279 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:08:21.707461 kubelet[3279]: E0117 12:08:21.707446 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:08:21.707829 kubelet[3279]: W0117 12:08:21.707644 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:08:21.708044 kubelet[3279]: E0117 12:08:21.707898 3279 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:08:21.710223 kubelet[3279]: E0117 12:08:21.710089 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:08:21.710976 kubelet[3279]: W0117 12:08:21.710314 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:08:21.712573 kubelet[3279]: E0117 12:08:21.712135 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:08:21.712573 kubelet[3279]: W0117 12:08:21.712150 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:08:21.712573 kubelet[3279]: E0117 12:08:21.712166 3279 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:08:21.712688 kubelet[3279]: E0117 12:08:21.712584 3279 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:08:21.713173 kubelet[3279]: E0117 12:08:21.712900 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:08:21.713173 kubelet[3279]: W0117 12:08:21.712918 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:08:21.713173 kubelet[3279]: E0117 12:08:21.712932 3279 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 12:08:21.733609 kubelet[3279]: E0117 12:08:21.733226 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 12:08:21.733609 kubelet[3279]: W0117 12:08:21.733533 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 12:08:21.733609 kubelet[3279]: E0117 12:08:21.733565 3279 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 17 12:08:21.749996 containerd[1735]: time="2025-01-17T12:08:21.749860601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:08:21.749996 containerd[1735]: time="2025-01-17T12:08:21.749927241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:08:21.749996 containerd[1735]: time="2025-01-17T12:08:21.749943121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:08:21.751950 containerd[1735]: time="2025-01-17T12:08:21.751159563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:08:21.752463 containerd[1735]: time="2025-01-17T12:08:21.752244525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-78b99999f5-5v47d,Uid:8f55e838-a8c4-4712-91ce-6fa1b11545c0,Namespace:calico-system,Attempt:0,} returns sandbox id \"731064d75c071608bf3069af96b638bd89bbaa4a646a77705021c0f63c5ef930\"" Jan 17 12:08:21.758040 containerd[1735]: time="2025-01-17T12:08:21.758005534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 17 12:08:21.775310 systemd[1]: Started cri-containerd-8d2ef6f2b224c1613375426b0eea29eedbccb541773583781d17c383ae954fe3.scope - libcontainer container 8d2ef6f2b224c1613375426b0eea29eedbccb541773583781d17c383ae954fe3. Jan 17 12:08:21.822033 containerd[1735]: time="2025-01-17T12:08:21.821849997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4wt4b,Uid:3139c8f5-5a65-4d68-966c-f5acb20fbf7c,Namespace:calico-system,Attempt:0,} returns sandbox id \"8d2ef6f2b224c1613375426b0eea29eedbccb541773583781d17c383ae954fe3\"" Jan 17 12:08:23.164715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1791214442.mount: Deactivated successfully. 
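A note on the FlexVolume probe failures above: kubelet scans /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ and, for each vendor~driver directory, executes the driver binary with the argument `init` and unmarshals its stdout as JSON (hence the driver-call.go complaints). At this point the nodeagent~uds/uds binary is not installed yet, so the call yields empty output and the JSON decode fails with "unexpected end of JSON input"; Calico's flexvol-driver init container, which starts further down in this log, is what eventually installs it. Below is a minimal sketch of the handshake a driver is expected to implement, assuming the conventional status/capabilities JSON shape rather than Calico's actual uds binary:

```go
// flexdriver.go — sketch of a FlexVolume driver's "init" call, assuming the
// conventional DriverStatus JSON shape; this is NOT Calico's actual uds binary.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"` // "Success", "Failure", or "Not supported"
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"` // e.g. {"attach": false}
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// kubelet unmarshals whatever appears on stdout; an empty stdout is
		// exactly what produces "unexpected end of JSON input" in the log above.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Calls the driver does not implement report "Not supported".
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
	os.Exit(1)
}
```

Once a binary that prints a response like this exists at the probed path, the dynamic plugin probe succeeds and the error triplets stop.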
Jan 17 12:08:23.845863 containerd[1735]: time="2025-01-17T12:08:23.845813054Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:08:23.848963 containerd[1735]: time="2025-01-17T12:08:23.848928939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29231308" Jan 17 12:08:23.852258 containerd[1735]: time="2025-01-17T12:08:23.852206464Z" level=info msg="ImageCreate event name:\"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:08:23.857581 containerd[1735]: time="2025-01-17T12:08:23.857507593Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:08:23.858523 containerd[1735]: time="2025-01-17T12:08:23.858335434Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"29231162\" in 2.10011194s" Jan 17 12:08:23.858523 containerd[1735]: time="2025-01-17T12:08:23.858381674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Jan 17 12:08:23.859938 containerd[1735]: time="2025-01-17T12:08:23.859706476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 17 12:08:23.864131 kubelet[3279]: E0117 12:08:23.864080 3279 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zwfv8" podUID="c1a3420f-a34b-41c3-a151-733080e0373a" Jan 17 12:08:23.875666 containerd[1735]: time="2025-01-17T12:08:23.875036701Z" level=info msg="CreateContainer within sandbox \"731064d75c071608bf3069af96b638bd89bbaa4a646a77705021c0f63c5ef930\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 17 12:08:23.925825 containerd[1735]: time="2025-01-17T12:08:23.925780463Z" level=info msg="CreateContainer within sandbox \"731064d75c071608bf3069af96b638bd89bbaa4a646a77705021c0f63c5ef930\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c765d588ea8b9260df82900e6f24474e30d925dd7a1d1c0558b6c314aacf3aaa\"" Jan 17 12:08:23.927945 containerd[1735]: time="2025-01-17T12:08:23.927131585Z" level=info msg="StartContainer for \"c765d588ea8b9260df82900e6f24474e30d925dd7a1d1c0558b6c314aacf3aaa\"" Jan 17 12:08:23.959302 systemd[1]: Started cri-containerd-c765d588ea8b9260df82900e6f24474e30d925dd7a1d1c0558b6c314aacf3aaa.scope - libcontainer container c765d588ea8b9260df82900e6f24474e30d925dd7a1d1c0558b6c314aacf3aaa. 
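The pod_startup_latency_tracker entry just below reports podStartSLOduration=1.893664762 alongside podStartE2EDuration="3.997264187s" for calico-typha. The two differ by exactly the image-pull window (lastFinishedPulling minus firstStartedPulling), which the SLO figure excludes; a quick check, using only timestamps quoted in that entry:

```go
// slocheck.go — reproduce the pod_startup_latency_tracker numbers in the
// entry below from its own timestamps (the exclusion of pull time is read
// off the log's arithmetic, not taken from kubelet source).
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-01-17 12:08:21 +0000 UTC")               // podCreationTimestamp
	firstPull := mustParse("2025-01-17 12:08:21.755914371 +0000 UTC")   // firstStartedPulling
	lastPull := mustParse("2025-01-17 12:08:23.859513796 +0000 UTC")    // lastFinishedPulling
	running := mustParse("2025-01-17 12:08:24.997264187 +0000 UTC")     // observedRunningTime

	e2e := running.Sub(created)          // 3.997264187s == podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // pull window excluded: 1.893664762s == podStartSLOduration
	fmt.Println(e2e, slo)
}
```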
Jan 17 12:08:23.998230 containerd[1735]: time="2025-01-17T12:08:23.998177979Z" level=info msg="StartContainer for \"c765d588ea8b9260df82900e6f24474e30d925dd7a1d1c0558b6c314aacf3aaa\" returns successfully" Jan 17 12:08:24.997348 kubelet[3279]: I0117 12:08:24.997280 3279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-78b99999f5-5v47d" podStartSLOduration=1.893664762 podStartE2EDuration="3.997264187s" podCreationTimestamp="2025-01-17 12:08:21 +0000 UTC" firstStartedPulling="2025-01-17 12:08:21.755914371 +0000 UTC m=+24.003844724" lastFinishedPulling="2025-01-17 12:08:23.859513796 +0000 UTC m=+26.107444149" observedRunningTime="2025-01-17 12:08:24.996939826 +0000 UTC m=+27.244870179" watchObservedRunningTime="2025-01-17 12:08:24.997264187 +0000 UTC m=+27.245194540" Jan 17 12:08:25.009146 kubelet[3279]: E0117 12:08:25.009110 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:08:25.009146 kubelet[3279]: W0117 12:08:25.009137 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:08:25.009307 kubelet[3279]: E0117 12:08:25.009159 3279 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[The same FlexVolume probe triplet repeats verbatim roughly thirty more times between 12:08:25.009 and 12:08:25.026; duplicates elided.]
Jan 17 12:08:25.026217 kubelet[3279]: E0117 12:08:25.026200 3279 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:08:25.026217 kubelet[3279]: W0117 12:08:25.026215 3279 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:08:25.026290 kubelet[3279]: E0117 12:08:25.026224 3279 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 17 12:08:25.361139 containerd[1735]: time="2025-01-17T12:08:25.360330491Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:08:25.364322 containerd[1735]: time="2025-01-17T12:08:25.364250017Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5117811" Jan 17 12:08:25.369610 containerd[1735]: time="2025-01-17T12:08:25.367700383Z" level=info msg="ImageCreate event name:\"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:08:25.377706 containerd[1735]: time="2025-01-17T12:08:25.377007478Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:08:25.378313 containerd[1735]: time="2025-01-17T12:08:25.377645479Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6487425\" in 1.517899003s" Jan 17 12:08:25.378313 containerd[1735]: time="2025-01-17T12:08:25.377987840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Jan 17 12:08:25.383047 containerd[1735]: time="2025-01-17T12:08:25.383003848Z" level=info msg="CreateContainer within sandbox \"8d2ef6f2b224c1613375426b0eea29eedbccb541773583781d17c383ae954fe3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 12:08:25.431000 containerd[1735]: time="2025-01-17T12:08:25.430892845Z" level=info msg="CreateContainer within sandbox \"8d2ef6f2b224c1613375426b0eea29eedbccb541773583781d17c383ae954fe3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d937a14d80687e0c876f711af273a0dddcd56d85a2afdb8b8a38caf570016b23\"" Jan 17 12:08:25.432820 containerd[1735]: time="2025-01-17T12:08:25.431417526Z" level=info msg="StartContainer for \"d937a14d80687e0c876f711af273a0dddcd56d85a2afdb8b8a38caf570016b23\"" Jan 17 12:08:25.465286 systemd[1]: Started cri-containerd-d937a14d80687e0c876f711af273a0dddcd56d85a2afdb8b8a38caf570016b23.scope - libcontainer container d937a14d80687e0c876f711af273a0dddcd56d85a2afdb8b8a38caf570016b23. Jan 17 12:08:25.493121 containerd[1735]: time="2025-01-17T12:08:25.493051505Z" level=info msg="StartContainer for \"d937a14d80687e0c876f711af273a0dddcd56d85a2afdb8b8a38caf570016b23\" returns successfully" Jan 17 12:08:25.502582 systemd[1]: cri-containerd-d937a14d80687e0c876f711af273a0dddcd56d85a2afdb8b8a38caf570016b23.scope: Deactivated successfully. Jan 17 12:08:25.528753 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d937a14d80687e0c876f711af273a0dddcd56d85a2afdb8b8a38caf570016b23-rootfs.mount: Deactivated successfully. 
Jan 17 12:08:25.865853 kubelet[3279]: E0117 12:08:25.864868 3279 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zwfv8" podUID="c1a3420f-a34b-41c3-a151-733080e0373a" Jan 17 12:08:25.986388 kubelet[3279]: I0117 12:08:25.986354 3279 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:08:26.383412 containerd[1735]: time="2025-01-17T12:08:26.383131697Z" level=info msg="shim disconnected" id=d937a14d80687e0c876f711af273a0dddcd56d85a2afdb8b8a38caf570016b23 namespace=k8s.io Jan 17 12:08:26.383412 containerd[1735]: time="2025-01-17T12:08:26.383340497Z" level=warning msg="cleaning up after shim disconnected" id=d937a14d80687e0c876f711af273a0dddcd56d85a2afdb8b8a38caf570016b23 namespace=k8s.io Jan 17 12:08:26.383941 containerd[1735]: time="2025-01-17T12:08:26.383420658Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:08:26.990989 containerd[1735]: time="2025-01-17T12:08:26.990860515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 17 12:08:27.864156 kubelet[3279]: E0117 12:08:27.863837 3279 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zwfv8" podUID="c1a3420f-a34b-41c3-a151-733080e0373a" Jan 17 12:08:29.864614 kubelet[3279]: E0117 12:08:29.864264 3279 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zwfv8" podUID="c1a3420f-a34b-41c3-a151-733080e0373a" Jan 17 12:08:31.087681 containerd[1735]: time="2025-01-17T12:08:31.087602758Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:08:31.093572 containerd[1735]: time="2025-01-17T12:08:31.093518448Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=89703123" Jan 17 12:08:31.097116 containerd[1735]: time="2025-01-17T12:08:31.097035334Z" level=info msg="ImageCreate event name:\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:08:31.102578 containerd[1735]: time="2025-01-17T12:08:31.102438103Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:08:31.104280 containerd[1735]: time="2025-01-17T12:08:31.103544984Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"91072777\" in 4.112565029s" Jan 17 12:08:31.104280 containerd[1735]: time="2025-01-17T12:08:31.103591145Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference 
\"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Jan 17 12:08:31.108341 containerd[1735]: time="2025-01-17T12:08:31.108298112Z" level=info msg="CreateContainer within sandbox \"8d2ef6f2b224c1613375426b0eea29eedbccb541773583781d17c383ae954fe3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 12:08:31.151391 containerd[1735]: time="2025-01-17T12:08:31.151254944Z" level=info msg="CreateContainer within sandbox \"8d2ef6f2b224c1613375426b0eea29eedbccb541773583781d17c383ae954fe3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"634573dc7553d391733a0abc2d1e3e83157601f80c9fcfc2396f4c8c1f628453\"" Jan 17 12:08:31.152560 containerd[1735]: time="2025-01-17T12:08:31.152046865Z" level=info msg="StartContainer for \"634573dc7553d391733a0abc2d1e3e83157601f80c9fcfc2396f4c8c1f628453\"" Jan 17 12:08:31.189355 systemd[1]: Started cri-containerd-634573dc7553d391733a0abc2d1e3e83157601f80c9fcfc2396f4c8c1f628453.scope - libcontainer container 634573dc7553d391733a0abc2d1e3e83157601f80c9fcfc2396f4c8c1f628453. Jan 17 12:08:31.222985 containerd[1735]: time="2025-01-17T12:08:31.222930143Z" level=info msg="StartContainer for \"634573dc7553d391733a0abc2d1e3e83157601f80c9fcfc2396f4c8c1f628453\" returns successfully" Jan 17 12:08:31.865367 kubelet[3279]: E0117 12:08:31.864440 3279 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zwfv8" podUID="c1a3420f-a34b-41c3-a151-733080e0373a" Jan 17 12:08:32.669037 containerd[1735]: time="2025-01-17T12:08:32.668986108Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:08:32.671790 systemd[1]: cri-containerd-634573dc7553d391733a0abc2d1e3e83157601f80c9fcfc2396f4c8c1f628453.scope: Deactivated successfully. Jan 17 12:08:32.692359 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-634573dc7553d391733a0abc2d1e3e83157601f80c9fcfc2396f4c8c1f628453-rootfs.mount: Deactivated successfully. 
Jan 17 12:08:32.698018 kubelet[3279]: I0117 12:08:32.695896 3279 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 17 12:08:32.736131 kubelet[3279]: I0117 12:08:32.735883 3279 topology_manager.go:215] "Topology Admit Handler" podUID="a87b1eb5-ab2f-4531-8fae-234c482d801e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-cskls" Jan 17 12:08:32.885839 kubelet[3279]: I0117 12:08:32.750062 3279 topology_manager.go:215] "Topology Admit Handler" podUID="9ce371f8-6664-4b2d-8bfc-dc0423d17dd2" podNamespace="calico-apiserver" podName="calico-apiserver-85c7c4654d-dvhbm" Jan 17 12:08:32.885839 kubelet[3279]: I0117 12:08:32.752072 3279 topology_manager.go:215] "Topology Admit Handler" podUID="ce3e0252-15d5-43de-ba60-d0523e069f90" podNamespace="kube-system" podName="coredns-7db6d8ff4d-qn5wj" Jan 17 12:08:32.885839 kubelet[3279]: I0117 12:08:32.753164 3279 topology_manager.go:215] "Topology Admit Handler" podUID="054e26d4-3078-4a7b-9cfd-e882b8b74093" podNamespace="calico-apiserver" podName="calico-apiserver-85c7c4654d-9j4zk" Jan 17 12:08:32.885839 kubelet[3279]: I0117 12:08:32.754169 3279 topology_manager.go:215] "Topology Admit Handler" podUID="f694ce58-2a80-406a-a332-7d2c145777d9" podNamespace="calico-system" podName="calico-kube-controllers-7f67964b8b-rrbh6" Jan 17 12:08:32.885839 kubelet[3279]: I0117 12:08:32.780412 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjq5k\" (UniqueName: \"kubernetes.io/projected/ce3e0252-15d5-43de-ba60-d0523e069f90-kube-api-access-kjq5k\") pod \"coredns-7db6d8ff4d-qn5wj\" (UID: \"ce3e0252-15d5-43de-ba60-d0523e069f90\") " pod="kube-system/coredns-7db6d8ff4d-qn5wj" Jan 17 12:08:32.885839 kubelet[3279]: I0117 12:08:32.780447 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f694ce58-2a80-406a-a332-7d2c145777d9-tigera-ca-bundle\") pod \"calico-kube-controllers-7f67964b8b-rrbh6\" (UID: \"f694ce58-2a80-406a-a332-7d2c145777d9\") " pod="calico-system/calico-kube-controllers-7f67964b8b-rrbh6" Jan 17 12:08:32.885839 kubelet[3279]: I0117 12:08:32.780469 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq2gt\" (UniqueName: \"kubernetes.io/projected/f694ce58-2a80-406a-a332-7d2c145777d9-kube-api-access-jq2gt\") pod \"calico-kube-controllers-7f67964b8b-rrbh6\" (UID: \"f694ce58-2a80-406a-a332-7d2c145777d9\") " pod="calico-system/calico-kube-controllers-7f67964b8b-rrbh6" Jan 17 12:08:32.744926 systemd[1]: Created slice kubepods-burstable-poda87b1eb5_ab2f_4531_8fae_234c482d801e.slice - libcontainer container kubepods-burstable-poda87b1eb5_ab2f_4531_8fae_234c482d801e.slice. 
Jan 17 12:08:32.886442 kubelet[3279]: I0117 12:08:32.780506 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a87b1eb5-ab2f-4531-8fae-234c482d801e-config-volume\") pod \"coredns-7db6d8ff4d-cskls\" (UID: \"a87b1eb5-ab2f-4531-8fae-234c482d801e\") " pod="kube-system/coredns-7db6d8ff4d-cskls" Jan 17 12:08:32.886442 kubelet[3279]: I0117 12:08:32.780524 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce3e0252-15d5-43de-ba60-d0523e069f90-config-volume\") pod \"coredns-7db6d8ff4d-qn5wj\" (UID: \"ce3e0252-15d5-43de-ba60-d0523e069f90\") " pod="kube-system/coredns-7db6d8ff4d-qn5wj" Jan 17 12:08:32.886442 kubelet[3279]: I0117 12:08:32.780542 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv6ft\" (UniqueName: \"kubernetes.io/projected/9ce371f8-6664-4b2d-8bfc-dc0423d17dd2-kube-api-access-xv6ft\") pod \"calico-apiserver-85c7c4654d-dvhbm\" (UID: \"9ce371f8-6664-4b2d-8bfc-dc0423d17dd2\") " pod="calico-apiserver/calico-apiserver-85c7c4654d-dvhbm" Jan 17 12:08:32.886442 kubelet[3279]: I0117 12:08:32.780560 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk67s\" (UniqueName: \"kubernetes.io/projected/054e26d4-3078-4a7b-9cfd-e882b8b74093-kube-api-access-rk67s\") pod \"calico-apiserver-85c7c4654d-9j4zk\" (UID: \"054e26d4-3078-4a7b-9cfd-e882b8b74093\") " pod="calico-apiserver/calico-apiserver-85c7c4654d-9j4zk" Jan 17 12:08:32.886442 kubelet[3279]: I0117 12:08:32.780631 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76mfw\" (UniqueName: \"kubernetes.io/projected/a87b1eb5-ab2f-4531-8fae-234c482d801e-kube-api-access-76mfw\") pod \"coredns-7db6d8ff4d-cskls\" (UID: \"a87b1eb5-ab2f-4531-8fae-234c482d801e\") " pod="kube-system/coredns-7db6d8ff4d-cskls" Jan 17 12:08:32.763254 systemd[1]: Created slice kubepods-besteffort-pod9ce371f8_6664_4b2d_8bfc_dc0423d17dd2.slice - libcontainer container kubepods-besteffort-pod9ce371f8_6664_4b2d_8bfc_dc0423d17dd2.slice. Jan 17 12:08:32.886604 kubelet[3279]: I0117 12:08:32.780654 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/054e26d4-3078-4a7b-9cfd-e882b8b74093-calico-apiserver-certs\") pod \"calico-apiserver-85c7c4654d-9j4zk\" (UID: \"054e26d4-3078-4a7b-9cfd-e882b8b74093\") " pod="calico-apiserver/calico-apiserver-85c7c4654d-9j4zk" Jan 17 12:08:32.886604 kubelet[3279]: I0117 12:08:32.780671 3279 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9ce371f8-6664-4b2d-8bfc-dc0423d17dd2-calico-apiserver-certs\") pod \"calico-apiserver-85c7c4654d-dvhbm\" (UID: \"9ce371f8-6664-4b2d-8bfc-dc0423d17dd2\") " pod="calico-apiserver/calico-apiserver-85c7c4654d-dvhbm" Jan 17 12:08:32.773836 systemd[1]: Created slice kubepods-burstable-podce3e0252_15d5_43de_ba60_d0523e069f90.slice - libcontainer container kubepods-burstable-podce3e0252_15d5_43de_ba60_d0523e069f90.slice. 
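The RunPodSandbox failures a few entries below share one root cause: the calico CNI plugin reads the node name from /var/lib/calico/nodename, a file that exists only once the calico/node container is running and has mounted /var/lib/calico/ (the log's own error text says as much). Until then, every sandbox add/delete fails at that stat. A trivial reproduction of the precondition, with the path taken from the log and the surrounding logic assumed:

```go
// nodenamecheck.go — illustrate the failing precondition behind the
// "stat /var/lib/calico/nodename" errors below; path from the log, logic assumed.
package main

import (
	"fmt"
	"os"
)

func main() {
	const p = "/var/lib/calico/nodename" // written by the calico/node container at startup
	if _, err := os.Stat(p); err != nil {
		// This is the condition every failed RunPodSandbox below hits:
		fmt.Printf("%v: check that the calico/node container is running and has mounted /var/lib/calico/\n", err)
		os.Exit(1)
	}
	b, _ := os.ReadFile(p)
	fmt.Printf("node name: %s\n", b)
}
```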
Jan 17 12:08:32.782954 systemd[1]: Created slice kubepods-besteffort-pod054e26d4_3078_4a7b_9cfd_e882b8b74093.slice - libcontainer container kubepods-besteffort-pod054e26d4_3078_4a7b_9cfd_e882b8b74093.slice. Jan 17 12:08:32.791540 systemd[1]: Created slice kubepods-besteffort-podf694ce58_2a80_406a_a332_7d2c145777d9.slice - libcontainer container kubepods-besteffort-podf694ce58_2a80_406a_a332_7d2c145777d9.slice. Jan 17 12:08:33.186952 containerd[1735]: time="2025-01-17T12:08:33.186590449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cskls,Uid:a87b1eb5-ab2f-4531-8fae-234c482d801e,Namespace:kube-system,Attempt:0,}" Jan 17 12:08:33.197595 containerd[1735]: time="2025-01-17T12:08:33.197202507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85c7c4654d-dvhbm,Uid:9ce371f8-6664-4b2d-8bfc-dc0423d17dd2,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:08:33.197595 containerd[1735]: time="2025-01-17T12:08:33.197465948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qn5wj,Uid:ce3e0252-15d5-43de-ba60-d0523e069f90,Namespace:kube-system,Attempt:0,}" Jan 17 12:08:33.198349 containerd[1735]: time="2025-01-17T12:08:33.198302829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f67964b8b-rrbh6,Uid:f694ce58-2a80-406a-a332-7d2c145777d9,Namespace:calico-system,Attempt:0,}" Jan 17 12:08:33.198599 containerd[1735]: time="2025-01-17T12:08:33.198574709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85c7c4654d-9j4zk,Uid:054e26d4-3078-4a7b-9cfd-e882b8b74093,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:08:33.406898 containerd[1735]: time="2025-01-17T12:08:33.406822896Z" level=info msg="shim disconnected" id=634573dc7553d391733a0abc2d1e3e83157601f80c9fcfc2396f4c8c1f628453 namespace=k8s.io Jan 17 12:08:33.406898 containerd[1735]: time="2025-01-17T12:08:33.406878896Z" level=warning msg="cleaning up after shim disconnected" id=634573dc7553d391733a0abc2d1e3e83157601f80c9fcfc2396f4c8c1f628453 namespace=k8s.io Jan 17 12:08:33.406898 containerd[1735]: time="2025-01-17T12:08:33.406887136Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:08:33.641043 containerd[1735]: time="2025-01-17T12:08:33.640580685Z" level=error msg="Failed to destroy network for sandbox \"9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:08:33.643988 containerd[1735]: time="2025-01-17T12:08:33.643593410Z" level=error msg="encountered an error cleaning up failed sandbox \"9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:08:33.645019 containerd[1735]: time="2025-01-17T12:08:33.643928970Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f67964b8b-rrbh6,Uid:f694ce58-2a80-406a-a332-7d2c145777d9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Jan 17 12:08:33.648137 kubelet[3279]: E0117 12:08:33.646193 3279 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:08:33.648137 kubelet[3279]: E0117 12:08:33.646289 3279 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f67964b8b-rrbh6" Jan 17 12:08:33.648137 kubelet[3279]: E0117 12:08:33.646310 3279 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f67964b8b-rrbh6" Jan 17 12:08:33.648318 kubelet[3279]: E0117 12:08:33.646350 3279 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f67964b8b-rrbh6_calico-system(f694ce58-2a80-406a-a332-7d2c145777d9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f67964b8b-rrbh6_calico-system(f694ce58-2a80-406a-a332-7d2c145777d9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f67964b8b-rrbh6" podUID="f694ce58-2a80-406a-a332-7d2c145777d9" Jan 17 12:08:33.669912 containerd[1735]: time="2025-01-17T12:08:33.669853213Z" level=error msg="Failed to destroy network for sandbox \"9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:08:33.671364 containerd[1735]: time="2025-01-17T12:08:33.671307256Z" level=error msg="encountered an error cleaning up failed sandbox \"9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:08:33.671712 containerd[1735]: time="2025-01-17T12:08:33.671592696Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85c7c4654d-dvhbm,Uid:9ce371f8-6664-4b2d-8bfc-dc0423d17dd2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:08:33.672540 kubelet[3279]: E0117 12:08:33.672072 3279 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:08:33.672540 kubelet[3279]: E0117 12:08:33.672183 3279 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85c7c4654d-dvhbm" Jan 17 12:08:33.672540 kubelet[3279]: E0117 12:08:33.672214 3279 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85c7c4654d-dvhbm" Jan 17 12:08:33.674045 kubelet[3279]: E0117 12:08:33.672263 3279 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-85c7c4654d-dvhbm_calico-apiserver(9ce371f8-6664-4b2d-8bfc-dc0423d17dd2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-85c7c4654d-dvhbm_calico-apiserver(9ce371f8-6664-4b2d-8bfc-dc0423d17dd2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85c7c4654d-dvhbm" podUID="9ce371f8-6664-4b2d-8bfc-dc0423d17dd2" Jan 17 12:08:33.678396 containerd[1735]: time="2025-01-17T12:08:33.678211907Z" level=error msg="Failed to destroy network for sandbox \"16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:08:33.678608 containerd[1735]: time="2025-01-17T12:08:33.678539868Z" level=error msg="encountered an error cleaning up failed sandbox \"16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:08:33.678608 containerd[1735]: time="2025-01-17T12:08:33.678597068Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-85c7c4654d-9j4zk,Uid:054e26d4-3078-4a7b-9cfd-e882b8b74093,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:08:33.679212 kubelet[3279]: E0117 12:08:33.678990 3279 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:08:33.679212 kubelet[3279]: E0117 12:08:33.679056 3279 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85c7c4654d-9j4zk" Jan 17 12:08:33.679212 kubelet[3279]: E0117 12:08:33.679074 3279 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85c7c4654d-9j4zk" Jan 17 12:08:33.679660 kubelet[3279]: E0117 12:08:33.679128 3279 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-85c7c4654d-9j4zk_calico-apiserver(054e26d4-3078-4a7b-9cfd-e882b8b74093)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-85c7c4654d-9j4zk_calico-apiserver(054e26d4-3078-4a7b-9cfd-e882b8b74093)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85c7c4654d-9j4zk" podUID="054e26d4-3078-4a7b-9cfd-e882b8b74093" Jan 17 12:08:33.681580 containerd[1735]: time="2025-01-17T12:08:33.681276072Z" level=error msg="Failed to destroy network for sandbox \"8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:08:33.682034 containerd[1735]: time="2025-01-17T12:08:33.681972793Z" level=error msg="encountered an error cleaning up failed sandbox \"8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jan 17 12:08:33.683403 containerd[1735]: time="2025-01-17T12:08:33.682082634Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cskls,Uid:a87b1eb5-ab2f-4531-8fae-234c482d801e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:08:33.685441 kubelet[3279]: E0117 12:08:33.684633 3279 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:08:33.685441 kubelet[3279]: E0117 12:08:33.684686 3279 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-cskls" Jan 17 12:08:33.685441 kubelet[3279]: E0117 12:08:33.684706 3279 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-cskls" Jan 17 12:08:33.685686 containerd[1735]: time="2025-01-17T12:08:33.684661998Z" level=error msg="Failed to destroy network for sandbox \"7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:08:33.685743 kubelet[3279]: E0117 12:08:33.684740 3279 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-cskls_kube-system(a87b1eb5-ab2f-4531-8fae-234c482d801e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-cskls_kube-system(a87b1eb5-ab2f-4531-8fae-234c482d801e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-cskls" podUID="a87b1eb5-ab2f-4531-8fae-234c482d801e" Jan 17 12:08:33.687020 containerd[1735]: time="2025-01-17T12:08:33.686903522Z" level=error msg="encountered an error cleaning up failed sandbox \"7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:08:33.687286 containerd[1735]: time="2025-01-17T12:08:33.687183002Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qn5wj,Uid:ce3e0252-15d5-43de-ba60-d0523e069f90,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:08:33.688311 kubelet[3279]: E0117 12:08:33.688260 3279 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:08:33.688435 kubelet[3279]: E0117 12:08:33.688324 3279 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-qn5wj" Jan 17 12:08:33.688435 kubelet[3279]: E0117 12:08:33.688350 3279 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-qn5wj" Jan 17 12:08:33.688791 kubelet[3279]: E0117 12:08:33.688531 3279 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-qn5wj_kube-system(ce3e0252-15d5-43de-ba60-d0523e069f90)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-qn5wj_kube-system(ce3e0252-15d5-43de-ba60-d0523e069f90)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-qn5wj" podUID="ce3e0252-15d5-43de-ba60-d0523e069f90" Jan 17 12:08:33.870041 systemd[1]: Created slice kubepods-besteffort-podc1a3420f_a34b_41c3_a151_733080e0373a.slice - libcontainer container kubepods-besteffort-podc1a3420f_a34b_41c3_a151_733080e0373a.slice. 
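
The five RunPodSandbox failures above are one fault repeated per pod: the Calico CNI plugin cannot stat /var/lib/calico/nodename, a file the calico/node container writes only once it is running with /var/lib/calico/ bind-mounted from the host. Until that file appears, every CNI ADD and DEL on the node fails the same way and kubelet keeps the pods in ContainerCreating. A minimal Go sketch of the gating check implied by the error text (illustrative only, not Calico's actual source; the path and hint wording are taken from the log):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // Path from the error messages above: calico/node writes the host's
    // node name here once it is running with /var/lib/calico/ mounted.
    const nodenameFile = "/var/lib/calico/nodename"

    // detectNodename fails a CNI operation early when the file is missing,
    // producing the same hint that appears throughout this log.
    func detectNodename() (string, error) {
        data, err := os.ReadFile(nodenameFile)
        if os.IsNotExist(err) {
            return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
        }
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(data)), nil
    }

    func main() {
        name, err := detectNodename()
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("nodename:", name)
    }

The entries that follow show kubelet retrying exactly this path until calico-node comes up.
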
Jan 17 12:08:33.873022 containerd[1735]: time="2025-01-17T12:08:33.872902991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zwfv8,Uid:c1a3420f-a34b-41c3-a151-733080e0373a,Namespace:calico-system,Attempt:0,}" Jan 17 12:08:33.954058 containerd[1735]: time="2025-01-17T12:08:33.953846726Z" level=error msg="Failed to destroy network for sandbox \"e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:08:33.956042 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1-shm.mount: Deactivated successfully. Jan 17 12:08:33.956667 containerd[1735]: time="2025-01-17T12:08:33.956304690Z" level=error msg="encountered an error cleaning up failed sandbox \"e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:08:33.956667 containerd[1735]: time="2025-01-17T12:08:33.956394410Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zwfv8,Uid:c1a3420f-a34b-41c3-a151-733080e0373a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:08:33.957638 kubelet[3279]: E0117 12:08:33.957077 3279 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:08:33.957638 kubelet[3279]: E0117 12:08:33.957165 3279 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zwfv8" Jan 17 12:08:33.957638 kubelet[3279]: E0117 12:08:33.957186 3279 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zwfv8" Jan 17 12:08:33.959396 kubelet[3279]: E0117 12:08:33.957233 3279 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zwfv8_calico-system(c1a3420f-a34b-41c3-a151-733080e0373a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-zwfv8_calico-system(c1a3420f-a34b-41c3-a151-733080e0373a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zwfv8" podUID="c1a3420f-a34b-41c3-a151-733080e0373a" Jan 17 12:08:34.008011 kubelet[3279]: I0117 12:08:34.007369 3279 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7" Jan 17 12:08:34.009501 containerd[1735]: time="2025-01-17T12:08:34.008220296Z" level=info msg="StopPodSandbox for \"16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7\"" Jan 17 12:08:34.009501 containerd[1735]: time="2025-01-17T12:08:34.008389616Z" level=info msg="Ensure that sandbox 16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7 in task-service has been cleanup successfully" Jan 17 12:08:34.010583 kubelet[3279]: I0117 12:08:34.010540 3279 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495" Jan 17 12:08:34.011385 containerd[1735]: time="2025-01-17T12:08:34.011087981Z" level=info msg="StopPodSandbox for \"7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495\"" Jan 17 12:08:34.012011 containerd[1735]: time="2025-01-17T12:08:34.011942782Z" level=info msg="Ensure that sandbox 7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495 in task-service has been cleanup successfully" Jan 17 12:08:34.012437 kubelet[3279]: I0117 12:08:34.012414 3279 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c" Jan 17 12:08:34.014688 containerd[1735]: time="2025-01-17T12:08:34.014594747Z" level=info msg="StopPodSandbox for \"9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c\"" Jan 17 12:08:34.016081 containerd[1735]: time="2025-01-17T12:08:34.015053747Z" level=info msg="Ensure that sandbox 9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c in task-service has been cleanup successfully" Jan 17 12:08:34.022862 containerd[1735]: time="2025-01-17T12:08:34.022775360Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 17 12:08:34.043257 kubelet[3279]: I0117 12:08:34.041416 3279 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1" Jan 17 12:08:34.048266 containerd[1735]: time="2025-01-17T12:08:34.048215643Z" level=info msg="StopPodSandbox for \"e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1\"" Jan 17 12:08:34.048437 containerd[1735]: time="2025-01-17T12:08:34.048413123Z" level=info msg="Ensure that sandbox e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1 in task-service has been cleanup successfully" Jan 17 12:08:34.057829 kubelet[3279]: I0117 12:08:34.057798 3279 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295" Jan 17 12:08:34.064181 containerd[1735]: time="2025-01-17T12:08:34.063987349Z" level=info msg="StopPodSandbox for \"9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295\"" Jan 17 
12:08:34.066478 containerd[1735]: time="2025-01-17T12:08:34.066423433Z" level=info msg="Ensure that sandbox 9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295 in task-service has been cleanup successfully" Jan 17 12:08:34.072927 kubelet[3279]: I0117 12:08:34.072169 3279 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6" Jan 17 12:08:34.075050 containerd[1735]: time="2025-01-17T12:08:34.074922167Z" level=info msg="StopPodSandbox for \"8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6\"" Jan 17 12:08:34.075671 containerd[1735]: time="2025-01-17T12:08:34.075642608Z" level=info msg="Ensure that sandbox 8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6 in task-service has been cleanup successfully" Jan 17 12:08:34.115175 containerd[1735]: time="2025-01-17T12:08:34.115126034Z" level=error msg="StopPodSandbox for \"8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6\" failed" error="failed to destroy network for sandbox \"8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:08:34.115719 kubelet[3279]: E0117 12:08:34.115681 3279 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6" Jan 17 12:08:34.115910 kubelet[3279]: E0117 12:08:34.115861 3279 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6"} Jan 17 12:08:34.116507 kubelet[3279]: E0117 12:08:34.116438 3279 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a87b1eb5-ab2f-4531-8fae-234c482d801e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:08:34.116507 kubelet[3279]: E0117 12:08:34.116473 3279 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a87b1eb5-ab2f-4531-8fae-234c482d801e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-cskls" podUID="a87b1eb5-ab2f-4531-8fae-234c482d801e" Jan 17 12:08:34.122764 containerd[1735]: time="2025-01-17T12:08:34.122711727Z" level=error msg="StopPodSandbox for \"16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7\" failed" error="failed to destroy network for sandbox 
\"16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:08:34.123253 kubelet[3279]: E0117 12:08:34.123123 3279 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7" Jan 17 12:08:34.123253 kubelet[3279]: E0117 12:08:34.123171 3279 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7"} Jan 17 12:08:34.123253 kubelet[3279]: E0117 12:08:34.123203 3279 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"054e26d4-3078-4a7b-9cfd-e882b8b74093\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:08:34.123253 kubelet[3279]: E0117 12:08:34.123224 3279 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"054e26d4-3078-4a7b-9cfd-e882b8b74093\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85c7c4654d-9j4zk" podUID="054e26d4-3078-4a7b-9cfd-e882b8b74093" Jan 17 12:08:34.139417 containerd[1735]: time="2025-01-17T12:08:34.139357514Z" level=error msg="StopPodSandbox for \"9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c\" failed" error="failed to destroy network for sandbox \"9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:08:34.139858 kubelet[3279]: E0117 12:08:34.139646 3279 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c" Jan 17 12:08:34.139858 kubelet[3279]: E0117 12:08:34.139695 3279 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c"} Jan 17 12:08:34.139858 kubelet[3279]: E0117 12:08:34.139732 3279 
kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9ce371f8-6664-4b2d-8bfc-dc0423d17dd2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:08:34.139858 kubelet[3279]: E0117 12:08:34.139756 3279 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9ce371f8-6664-4b2d-8bfc-dc0423d17dd2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85c7c4654d-dvhbm" podUID="9ce371f8-6664-4b2d-8bfc-dc0423d17dd2" Jan 17 12:08:34.143482 containerd[1735]: time="2025-01-17T12:08:34.143342521Z" level=error msg="StopPodSandbox for \"7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495\" failed" error="failed to destroy network for sandbox \"7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:08:34.144415 kubelet[3279]: E0117 12:08:34.144261 3279 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495" Jan 17 12:08:34.144415 kubelet[3279]: E0117 12:08:34.144312 3279 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495"} Jan 17 12:08:34.144415 kubelet[3279]: E0117 12:08:34.144344 3279 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ce3e0252-15d5-43de-ba60-d0523e069f90\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:08:34.144415 kubelet[3279]: E0117 12:08:34.144367 3279 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ce3e0252-15d5-43de-ba60-d0523e069f90\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-7db6d8ff4d-qn5wj" podUID="ce3e0252-15d5-43de-ba60-d0523e069f90" Jan 17 12:08:34.148903 containerd[1735]: time="2025-01-17T12:08:34.148847090Z" level=error msg="StopPodSandbox for \"e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1\" failed" error="failed to destroy network for sandbox \"e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:08:34.149351 kubelet[3279]: E0117 12:08:34.149092 3279 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1" Jan 17 12:08:34.149351 kubelet[3279]: E0117 12:08:34.149164 3279 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1"} Jan 17 12:08:34.149351 kubelet[3279]: E0117 12:08:34.149200 3279 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c1a3420f-a34b-41c3-a151-733080e0373a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:08:34.149351 kubelet[3279]: E0117 12:08:34.149223 3279 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c1a3420f-a34b-41c3-a151-733080e0373a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zwfv8" podUID="c1a3420f-a34b-41c3-a151-733080e0373a" Jan 17 12:08:34.155198 containerd[1735]: time="2025-01-17T12:08:34.155148221Z" level=error msg="StopPodSandbox for \"9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295\" failed" error="failed to destroy network for sandbox \"9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:08:34.155694 kubelet[3279]: E0117 12:08:34.155524 3279 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295" Jan 17 12:08:34.155694 kubelet[3279]: E0117 12:08:34.155591 3279 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295"} Jan 17 12:08:34.155694 kubelet[3279]: E0117 12:08:34.155624 3279 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f694ce58-2a80-406a-a332-7d2c145777d9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:08:34.155694 kubelet[3279]: E0117 12:08:34.155653 3279 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f694ce58-2a80-406a-a332-7d2c145777d9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f67964b8b-rrbh6" podUID="f694ce58-2a80-406a-a332-7d2c145777d9" Jan 17 12:08:40.118855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2946881763.mount: Deactivated successfully. Jan 17 12:08:40.412047 containerd[1735]: time="2025-01-17T12:08:40.411915922Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:08:40.415942 containerd[1735]: time="2025-01-17T12:08:40.415880009Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=137671762" Jan 17 12:08:40.421237 containerd[1735]: time="2025-01-17T12:08:40.421165899Z" level=info msg="ImageCreate event name:\"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:08:40.425753 containerd[1735]: time="2025-01-17T12:08:40.425677827Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:08:40.426459 containerd[1735]: time="2025-01-17T12:08:40.426299948Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"137671624\" in 6.403471788s" Jan 17 12:08:40.426459 containerd[1735]: time="2025-01-17T12:08:40.426339908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Jan 17 12:08:40.443248 containerd[1735]: time="2025-01-17T12:08:40.442650058Z" level=info msg="CreateContainer within sandbox \"8d2ef6f2b224c1613375426b0eea29eedbccb541773583781d17c383ae954fe3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 
17 12:08:40.490249 containerd[1735]: time="2025-01-17T12:08:40.490195506Z" level=info msg="CreateContainer within sandbox \"8d2ef6f2b224c1613375426b0eea29eedbccb541773583781d17c383ae954fe3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4702aa73da4b268e56a4e6ad07b18f8d97560d8d90533e345bf1eee4629e86d1\"" Jan 17 12:08:40.490991 containerd[1735]: time="2025-01-17T12:08:40.490950147Z" level=info msg="StartContainer for \"4702aa73da4b268e56a4e6ad07b18f8d97560d8d90533e345bf1eee4629e86d1\"" Jan 17 12:08:40.519648 systemd[1]: Started cri-containerd-4702aa73da4b268e56a4e6ad07b18f8d97560d8d90533e345bf1eee4629e86d1.scope - libcontainer container 4702aa73da4b268e56a4e6ad07b18f8d97560d8d90533e345bf1eee4629e86d1. Jan 17 12:08:40.558049 containerd[1735]: time="2025-01-17T12:08:40.557932431Z" level=info msg="StartContainer for \"4702aa73da4b268e56a4e6ad07b18f8d97560d8d90533e345bf1eee4629e86d1\" returns successfully" Jan 17 12:08:41.020686 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 12:08:41.020859 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 17 12:08:41.125259 kubelet[3279]: I0117 12:08:41.124482 3279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4wt4b" podStartSLOduration=1.520528644 podStartE2EDuration="20.124465034s" podCreationTimestamp="2025-01-17 12:08:21 +0000 UTC" firstStartedPulling="2025-01-17 12:08:21.82351356 +0000 UTC m=+24.071443913" lastFinishedPulling="2025-01-17 12:08:40.42744995 +0000 UTC m=+42.675380303" observedRunningTime="2025-01-17 12:08:41.123958593 +0000 UTC m=+43.371888946" watchObservedRunningTime="2025-01-17 12:08:41.124465034 +0000 UTC m=+43.372395347" Jan 17 12:08:42.672148 kernel: bpftool[4534]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 12:08:42.868998 systemd-networkd[1442]: vxlan.calico: Link UP Jan 17 12:08:42.869006 systemd-networkd[1442]: vxlan.calico: Gained carrier Jan 17 12:08:44.080348 systemd-networkd[1442]: vxlan.calico: Gained IPv6LL Jan 17 12:08:44.869463 containerd[1735]: time="2025-01-17T12:08:44.869148062Z" level=info msg="StopPodSandbox for \"9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295\"" Jan 17 12:08:44.983945 containerd[1735]: 2025-01-17 12:08:44.941 [INFO][4620] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295" Jan 17 12:08:44.983945 containerd[1735]: 2025-01-17 12:08:44.942 [INFO][4620] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295" iface="eth0" netns="/var/run/netns/cni-2d79e53f-be89-b8a1-0f2e-54d34928493f" Jan 17 12:08:44.983945 containerd[1735]: 2025-01-17 12:08:44.942 [INFO][4620] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295" iface="eth0" netns="/var/run/netns/cni-2d79e53f-be89-b8a1-0f2e-54d34928493f" Jan 17 12:08:44.983945 containerd[1735]: 2025-01-17 12:08:44.942 [INFO][4620] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295" iface="eth0" netns="/var/run/netns/cni-2d79e53f-be89-b8a1-0f2e-54d34928493f" Jan 17 12:08:44.983945 containerd[1735]: 2025-01-17 12:08:44.942 [INFO][4620] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295" Jan 17 12:08:44.983945 containerd[1735]: 2025-01-17 12:08:44.942 [INFO][4620] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295" Jan 17 12:08:44.983945 containerd[1735]: 2025-01-17 12:08:44.969 [INFO][4628] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295" HandleID="k8s-pod-network.9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295" Workload="ci--4081.3.0--a--c8756aff3b-k8s-calico--kube--controllers--7f67964b8b--rrbh6-eth0" Jan 17 12:08:44.983945 containerd[1735]: 2025-01-17 12:08:44.969 [INFO][4628] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:08:44.983945 containerd[1735]: 2025-01-17 12:08:44.969 [INFO][4628] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:08:44.983945 containerd[1735]: 2025-01-17 12:08:44.978 [WARNING][4628] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295" HandleID="k8s-pod-network.9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295" Workload="ci--4081.3.0--a--c8756aff3b-k8s-calico--kube--controllers--7f67964b8b--rrbh6-eth0" Jan 17 12:08:44.983945 containerd[1735]: 2025-01-17 12:08:44.978 [INFO][4628] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295" HandleID="k8s-pod-network.9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295" Workload="ci--4081.3.0--a--c8756aff3b-k8s-calico--kube--controllers--7f67964b8b--rrbh6-eth0" Jan 17 12:08:44.983945 containerd[1735]: 2025-01-17 12:08:44.980 [INFO][4628] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:08:44.983945 containerd[1735]: 2025-01-17 12:08:44.982 [INFO][4620] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295" Jan 17 12:08:44.986400 containerd[1735]: time="2025-01-17T12:08:44.984237289Z" level=info msg="TearDown network for sandbox \"9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295\" successfully" Jan 17 12:08:44.986400 containerd[1735]: time="2025-01-17T12:08:44.986184772Z" level=info msg="StopPodSandbox for \"9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295\" returns successfully" Jan 17 12:08:44.987751 containerd[1735]: time="2025-01-17T12:08:44.987031733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f67964b8b-rrbh6,Uid:f694ce58-2a80-406a-a332-7d2c145777d9,Namespace:calico-system,Attempt:1,}" Jan 17 12:08:44.987819 systemd[1]: run-netns-cni\x2d2d79e53f\x2dbe89\x2db8a1\x2d0f2e\x2d54d34928493f.mount: Deactivated successfully. 
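
This is the first teardown that completes: with calico-node running, the plugin enters the netns (the veth is already gone, so there is nothing to delete), releases the IPAM handle, and kubelet immediately re-runs the sandbox as Attempt:1. The WARNING at 12:08:44.978 ("Asked to release address but it doesn't exist. Ignoring") is the important detail: the original ADD failed before IPAM ever assigned an address, so DEL has to treat a missing allocation as success. A sketch of that idempotent-release pattern, using a stand-in store rather than Calico's datastore:

    package main

    import (
        "errors"
        "fmt"
    )

    // errNotFound stands in for the datastore's "no such handle" error.
    var errNotFound = errors.New("handle does not exist")

    type ipamStore struct{ handles map[string]bool }

    func (s *ipamStore) release(handleID string) error {
        if !s.handles[handleID] {
            return errNotFound
        }
        delete(s.handles, handleID)
        return nil
    }

    // teardown converges to "released" no matter how often it runs: a
    // missing handle is logged and ignored rather than failing the retry.
    func teardown(s *ipamStore, handleID string) error {
        err := s.release(handleID)
        if errors.Is(err, errNotFound) {
            fmt.Println("Asked to release address but it doesn't exist. Ignoring")
            return nil
        }
        return err
    }

    func main() {
        s := &ipamStore{handles: map[string]bool{}}
        // The failed ADD never reached IPAM, so the first DEL already hits
        // the "doesn't exist" path and still succeeds.
        _ = teardown(s, "k8s-pod-network.9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295")
    }
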
Jan 17 12:08:45.173420 systemd-networkd[1442]: cali96cff1ebf16: Link UP Jan 17 12:08:45.175079 systemd-networkd[1442]: cali96cff1ebf16: Gained carrier Jan 17 12:08:45.195889 containerd[1735]: 2025-01-17 12:08:45.092 [INFO][4634] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--c8756aff3b-k8s-calico--kube--controllers--7f67964b8b--rrbh6-eth0 calico-kube-controllers-7f67964b8b- calico-system f694ce58-2a80-406a-a332-7d2c145777d9 781 0 2025-01-17 12:08:21 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7f67964b8b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.0-a-c8756aff3b calico-kube-controllers-7f67964b8b-rrbh6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali96cff1ebf16 [] []}} ContainerID="9d45ee43997fbd28c3ccc073fb10e523f4bbb5c2fb4c802132f04bd02fc2cd78" Namespace="calico-system" Pod="calico-kube-controllers-7f67964b8b-rrbh6" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-calico--kube--controllers--7f67964b8b--rrbh6-" Jan 17 12:08:45.195889 containerd[1735]: 2025-01-17 12:08:45.092 [INFO][4634] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9d45ee43997fbd28c3ccc073fb10e523f4bbb5c2fb4c802132f04bd02fc2cd78" Namespace="calico-system" Pod="calico-kube-controllers-7f67964b8b-rrbh6" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-calico--kube--controllers--7f67964b8b--rrbh6-eth0" Jan 17 12:08:45.195889 containerd[1735]: 2025-01-17 12:08:45.122 [INFO][4646] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9d45ee43997fbd28c3ccc073fb10e523f4bbb5c2fb4c802132f04bd02fc2cd78" HandleID="k8s-pod-network.9d45ee43997fbd28c3ccc073fb10e523f4bbb5c2fb4c802132f04bd02fc2cd78" Workload="ci--4081.3.0--a--c8756aff3b-k8s-calico--kube--controllers--7f67964b8b--rrbh6-eth0" Jan 17 12:08:45.195889 containerd[1735]: 2025-01-17 12:08:45.134 [INFO][4646] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9d45ee43997fbd28c3ccc073fb10e523f4bbb5c2fb4c802132f04bd02fc2cd78" HandleID="k8s-pod-network.9d45ee43997fbd28c3ccc073fb10e523f4bbb5c2fb4c802132f04bd02fc2cd78" Workload="ci--4081.3.0--a--c8756aff3b-k8s-calico--kube--controllers--7f67964b8b--rrbh6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000316fd0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-c8756aff3b", "pod":"calico-kube-controllers-7f67964b8b-rrbh6", "timestamp":"2025-01-17 12:08:45.122819794 +0000 UTC"}, Hostname:"ci-4081.3.0-a-c8756aff3b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:08:45.195889 containerd[1735]: 2025-01-17 12:08:45.135 [INFO][4646] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:08:45.195889 containerd[1735]: 2025-01-17 12:08:45.135 [INFO][4646] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:08:45.195889 containerd[1735]: 2025-01-17 12:08:45.135 [INFO][4646] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-c8756aff3b' Jan 17 12:08:45.195889 containerd[1735]: 2025-01-17 12:08:45.136 [INFO][4646] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9d45ee43997fbd28c3ccc073fb10e523f4bbb5c2fb4c802132f04bd02fc2cd78" host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:45.195889 containerd[1735]: 2025-01-17 12:08:45.140 [INFO][4646] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:45.195889 containerd[1735]: 2025-01-17 12:08:45.144 [INFO][4646] ipam/ipam.go 489: Trying affinity for 192.168.42.192/26 host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:45.195889 containerd[1735]: 2025-01-17 12:08:45.146 [INFO][4646] ipam/ipam.go 155: Attempting to load block cidr=192.168.42.192/26 host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:45.195889 containerd[1735]: 2025-01-17 12:08:45.148 [INFO][4646] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.42.192/26 host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:45.195889 containerd[1735]: 2025-01-17 12:08:45.148 [INFO][4646] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.42.192/26 handle="k8s-pod-network.9d45ee43997fbd28c3ccc073fb10e523f4bbb5c2fb4c802132f04bd02fc2cd78" host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:45.195889 containerd[1735]: 2025-01-17 12:08:45.149 [INFO][4646] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9d45ee43997fbd28c3ccc073fb10e523f4bbb5c2fb4c802132f04bd02fc2cd78 Jan 17 12:08:45.195889 containerd[1735]: 2025-01-17 12:08:45.154 [INFO][4646] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.42.192/26 handle="k8s-pod-network.9d45ee43997fbd28c3ccc073fb10e523f4bbb5c2fb4c802132f04bd02fc2cd78" host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:45.195889 containerd[1735]: 2025-01-17 12:08:45.165 [INFO][4646] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.42.193/26] block=192.168.42.192/26 handle="k8s-pod-network.9d45ee43997fbd28c3ccc073fb10e523f4bbb5c2fb4c802132f04bd02fc2cd78" host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:45.195889 containerd[1735]: 2025-01-17 12:08:45.166 [INFO][4646] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.42.193/26] handle="k8s-pod-network.9d45ee43997fbd28c3ccc073fb10e523f4bbb5c2fb4c802132f04bd02fc2cd78" host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:45.195889 containerd[1735]: 2025-01-17 12:08:45.166 [INFO][4646] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
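
The trace above is a worked example of Calico's block-affinity IPAM: the node ci-4081.3.0-a-c8756aff3b holds an affinity for the /26 block 192.168.42.192/26, confirms and loads it, then claims the first free address, 192.168.42.193. The containment arithmetic can be checked with Go's net/netip, using the values from the log:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // The node's affine block and the address claimed in the trace.
        block := netip.MustParsePrefix("192.168.42.192/26") // 64 addresses: .192-.255
        claimed := netip.MustParseAddr("192.168.42.193")

        fmt.Println(block.Contains(claimed)) // true: the claim stays inside the block

        // .193 is the address right after the block's base, matching the
        // first allocation on a freshly claimed block in this log.
        fmt.Println(block.Addr().Next() == claimed) // true
    }

Because each node works mostly within blocks it already owns, the whole lookup-and-claim sequence here resolves in roughly 30 ms (12:08:45.135 to 12:08:45.166).
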
Jan 17 12:08:45.195889 containerd[1735]: 2025-01-17 12:08:45.166 [INFO][4646] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.42.193/26] IPv6=[] ContainerID="9d45ee43997fbd28c3ccc073fb10e523f4bbb5c2fb4c802132f04bd02fc2cd78" HandleID="k8s-pod-network.9d45ee43997fbd28c3ccc073fb10e523f4bbb5c2fb4c802132f04bd02fc2cd78" Workload="ci--4081.3.0--a--c8756aff3b-k8s-calico--kube--controllers--7f67964b8b--rrbh6-eth0" Jan 17 12:08:45.196806 containerd[1735]: 2025-01-17 12:08:45.168 [INFO][4634] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9d45ee43997fbd28c3ccc073fb10e523f4bbb5c2fb4c802132f04bd02fc2cd78" Namespace="calico-system" Pod="calico-kube-controllers-7f67964b8b-rrbh6" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-calico--kube--controllers--7f67964b8b--rrbh6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c8756aff3b-k8s-calico--kube--controllers--7f67964b8b--rrbh6-eth0", GenerateName:"calico-kube-controllers-7f67964b8b-", Namespace:"calico-system", SelfLink:"", UID:"f694ce58-2a80-406a-a332-7d2c145777d9", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 8, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f67964b8b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c8756aff3b", ContainerID:"", Pod:"calico-kube-controllers-7f67964b8b-rrbh6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.42.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali96cff1ebf16", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:08:45.196806 containerd[1735]: 2025-01-17 12:08:45.168 [INFO][4634] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.42.193/32] ContainerID="9d45ee43997fbd28c3ccc073fb10e523f4bbb5c2fb4c802132f04bd02fc2cd78" Namespace="calico-system" Pod="calico-kube-controllers-7f67964b8b-rrbh6" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-calico--kube--controllers--7f67964b8b--rrbh6-eth0" Jan 17 12:08:45.196806 containerd[1735]: 2025-01-17 12:08:45.168 [INFO][4634] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali96cff1ebf16 ContainerID="9d45ee43997fbd28c3ccc073fb10e523f4bbb5c2fb4c802132f04bd02fc2cd78" Namespace="calico-system" Pod="calico-kube-controllers-7f67964b8b-rrbh6" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-calico--kube--controllers--7f67964b8b--rrbh6-eth0" Jan 17 12:08:45.196806 containerd[1735]: 2025-01-17 12:08:45.173 [INFO][4634] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9d45ee43997fbd28c3ccc073fb10e523f4bbb5c2fb4c802132f04bd02fc2cd78" Namespace="calico-system" Pod="calico-kube-controllers-7f67964b8b-rrbh6" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-calico--kube--controllers--7f67964b8b--rrbh6-eth0" Jan 17 12:08:45.196806 
containerd[1735]: 2025-01-17 12:08:45.173 [INFO][4634] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9d45ee43997fbd28c3ccc073fb10e523f4bbb5c2fb4c802132f04bd02fc2cd78" Namespace="calico-system" Pod="calico-kube-controllers-7f67964b8b-rrbh6" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-calico--kube--controllers--7f67964b8b--rrbh6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c8756aff3b-k8s-calico--kube--controllers--7f67964b8b--rrbh6-eth0", GenerateName:"calico-kube-controllers-7f67964b8b-", Namespace:"calico-system", SelfLink:"", UID:"f694ce58-2a80-406a-a332-7d2c145777d9", ResourceVersion:"781", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 8, 21, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f67964b8b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c8756aff3b", ContainerID:"9d45ee43997fbd28c3ccc073fb10e523f4bbb5c2fb4c802132f04bd02fc2cd78", Pod:"calico-kube-controllers-7f67964b8b-rrbh6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.42.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali96cff1ebf16", MAC:"6a:c6:09:d5:59:03", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:08:45.196806 containerd[1735]: 2025-01-17 12:08:45.189 [INFO][4634] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9d45ee43997fbd28c3ccc073fb10e523f4bbb5c2fb4c802132f04bd02fc2cd78" Namespace="calico-system" Pod="calico-kube-controllers-7f67964b8b-rrbh6" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-calico--kube--controllers--7f67964b8b--rrbh6-eth0" Jan 17 12:08:45.712594 containerd[1735]: time="2025-01-17T12:08:45.712340979Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:08:45.712594 containerd[1735]: time="2025-01-17T12:08:45.712412219Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:08:45.712594 containerd[1735]: time="2025-01-17T12:08:45.712462420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:08:45.712594 containerd[1735]: time="2025-01-17T12:08:45.712563380Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:08:45.737273 systemd[1]: Started cri-containerd-9d45ee43997fbd28c3ccc073fb10e523f4bbb5c2fb4c802132f04bd02fc2cd78.scope - libcontainer container 9d45ee43997fbd28c3ccc073fb10e523f4bbb5c2fb4c802132f04bd02fc2cd78. 
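
At this point the endpoint has its interface name (cali96cff1ebf16), MAC (6a:c6:09:d5:59:03), and IP (192.168.42.193/32) and has been written to the datastore, after which containerd's runc shim starts the sandbox. What the runtime gets back from the ADD is a CNI result carrying those values. The sketch below shows roughly that shape with trimmed, hand-written structs (not the plugin's own types; cniVersion 0.4.0 is an assumption for this era). Note the address is reported as a /32 host address: the /26 earlier was only the IPAM block it was carved from.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Trimmed subset of the CNI result schema, enough to show the shape of
    // what the runtime receives once the ADD succeeds.
    type cniInterface struct {
        Name string `json:"name"`
        Mac  string `json:"mac,omitempty"`
    }

    type cniIP struct {
        Address   string `json:"address"`
        Interface *int   `json:"interface,omitempty"` // index into Interfaces
    }

    type cniResult struct {
        CNIVersion string         `json:"cniVersion"`
        Interfaces []cniInterface `json:"interfaces"`
        IPs        []cniIP        `json:"ips"`
    }

    func main() {
        host := 0
        res := cniResult{
            CNIVersion: "0.4.0", // assumed version, not confirmed by the log
            Interfaces: []cniInterface{{Name: "cali96cff1ebf16", Mac: "6a:c6:09:d5:59:03"}},
            IPs:        []cniIP{{Address: "192.168.42.193/32", Interface: &host}},
        }
        out, _ := json.MarshalIndent(res, "", "  ")
        fmt.Println(string(out))
    }
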
Jan 17 12:08:45.772517 containerd[1735]: time="2025-01-17T12:08:45.772469354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f67964b8b-rrbh6,Uid:f694ce58-2a80-406a-a332-7d2c145777d9,Namespace:calico-system,Attempt:1,} returns sandbox id \"9d45ee43997fbd28c3ccc073fb10e523f4bbb5c2fb4c802132f04bd02fc2cd78\"" Jan 17 12:08:45.774182 containerd[1735]: time="2025-01-17T12:08:45.774146877Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 17 12:08:45.865644 containerd[1735]: time="2025-01-17T12:08:45.865166381Z" level=info msg="StopPodSandbox for \"9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c\"" Jan 17 12:08:45.956609 containerd[1735]: 2025-01-17 12:08:45.920 [INFO][4719] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c" Jan 17 12:08:45.956609 containerd[1735]: 2025-01-17 12:08:45.922 [INFO][4719] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c" iface="eth0" netns="/var/run/netns/cni-7212b984-d449-5056-e3f4-3698ae16f143" Jan 17 12:08:45.956609 containerd[1735]: 2025-01-17 12:08:45.922 [INFO][4719] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c" iface="eth0" netns="/var/run/netns/cni-7212b984-d449-5056-e3f4-3698ae16f143" Jan 17 12:08:45.956609 containerd[1735]: 2025-01-17 12:08:45.923 [INFO][4719] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c" iface="eth0" netns="/var/run/netns/cni-7212b984-d449-5056-e3f4-3698ae16f143" Jan 17 12:08:45.956609 containerd[1735]: 2025-01-17 12:08:45.923 [INFO][4719] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c" Jan 17 12:08:45.956609 containerd[1735]: 2025-01-17 12:08:45.923 [INFO][4719] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c" Jan 17 12:08:45.956609 containerd[1735]: 2025-01-17 12:08:45.942 [INFO][4725] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c" HandleID="k8s-pod-network.9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c" Workload="ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--dvhbm-eth0" Jan 17 12:08:45.956609 containerd[1735]: 2025-01-17 12:08:45.942 [INFO][4725] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:08:45.956609 containerd[1735]: 2025-01-17 12:08:45.943 [INFO][4725] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:08:45.956609 containerd[1735]: 2025-01-17 12:08:45.951 [WARNING][4725] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c" HandleID="k8s-pod-network.9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c" Workload="ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--dvhbm-eth0" Jan 17 12:08:45.956609 containerd[1735]: 2025-01-17 12:08:45.951 [INFO][4725] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c" HandleID="k8s-pod-network.9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c" Workload="ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--dvhbm-eth0" Jan 17 12:08:45.956609 containerd[1735]: 2025-01-17 12:08:45.953 [INFO][4725] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:08:45.956609 containerd[1735]: 2025-01-17 12:08:45.955 [INFO][4719] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c" Jan 17 12:08:45.957383 containerd[1735]: time="2025-01-17T12:08:45.957306846Z" level=info msg="TearDown network for sandbox \"9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c\" successfully" Jan 17 12:08:45.957383 containerd[1735]: time="2025-01-17T12:08:45.957346846Z" level=info msg="StopPodSandbox for \"9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c\" returns successfully" Jan 17 12:08:45.958276 containerd[1735]: time="2025-01-17T12:08:45.958182728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85c7c4654d-dvhbm,Uid:9ce371f8-6664-4b2d-8bfc-dc0423d17dd2,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:08:45.986385 systemd[1]: run-netns-cni\x2d7212b984\x2dd449\x2d5056\x2de3f4\x2d3698ae16f143.mount: Deactivated successfully. 
Jan 17 12:08:46.403914 systemd-networkd[1442]: cali9b9de31bc06: Link UP Jan 17 12:08:46.404160 systemd-networkd[1442]: cali9b9de31bc06: Gained carrier Jan 17 12:08:46.422223 containerd[1735]: 2025-01-17 12:08:46.324 [INFO][4731] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--dvhbm-eth0 calico-apiserver-85c7c4654d- calico-apiserver 9ce371f8-6664-4b2d-8bfc-dc0423d17dd2 790 0 2025-01-17 12:08:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:85c7c4654d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-c8756aff3b calico-apiserver-85c7c4654d-dvhbm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9b9de31bc06 [] []}} ContainerID="1b0bb6dd2d3860acad487e3bb3df6e4fb4cf03c34fc4bd9a2ba7261e0e533aea" Namespace="calico-apiserver" Pod="calico-apiserver-85c7c4654d-dvhbm" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--dvhbm-" Jan 17 12:08:46.422223 containerd[1735]: 2025-01-17 12:08:46.325 [INFO][4731] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1b0bb6dd2d3860acad487e3bb3df6e4fb4cf03c34fc4bd9a2ba7261e0e533aea" Namespace="calico-apiserver" Pod="calico-apiserver-85c7c4654d-dvhbm" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--dvhbm-eth0" Jan 17 12:08:46.422223 containerd[1735]: 2025-01-17 12:08:46.355 [INFO][4742] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1b0bb6dd2d3860acad487e3bb3df6e4fb4cf03c34fc4bd9a2ba7261e0e533aea" HandleID="k8s-pod-network.1b0bb6dd2d3860acad487e3bb3df6e4fb4cf03c34fc4bd9a2ba7261e0e533aea" Workload="ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--dvhbm-eth0" Jan 17 12:08:46.422223 containerd[1735]: 2025-01-17 12:08:46.366 [INFO][4742] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1b0bb6dd2d3860acad487e3bb3df6e4fb4cf03c34fc4bd9a2ba7261e0e533aea" HandleID="k8s-pod-network.1b0bb6dd2d3860acad487e3bb3df6e4fb4cf03c34fc4bd9a2ba7261e0e533aea" Workload="ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--dvhbm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000333340), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-c8756aff3b", "pod":"calico-apiserver-85c7c4654d-dvhbm", "timestamp":"2025-01-17 12:08:46.355935436 +0000 UTC"}, Hostname:"ci-4081.3.0-a-c8756aff3b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:08:46.422223 containerd[1735]: 2025-01-17 12:08:46.367 [INFO][4742] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:08:46.422223 containerd[1735]: 2025-01-17 12:08:46.367 [INFO][4742] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:08:46.422223 containerd[1735]: 2025-01-17 12:08:46.367 [INFO][4742] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-c8756aff3b' Jan 17 12:08:46.422223 containerd[1735]: 2025-01-17 12:08:46.368 [INFO][4742] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1b0bb6dd2d3860acad487e3bb3df6e4fb4cf03c34fc4bd9a2ba7261e0e533aea" host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:46.422223 containerd[1735]: 2025-01-17 12:08:46.372 [INFO][4742] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:46.422223 containerd[1735]: 2025-01-17 12:08:46.376 [INFO][4742] ipam/ipam.go 489: Trying affinity for 192.168.42.192/26 host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:46.422223 containerd[1735]: 2025-01-17 12:08:46.378 [INFO][4742] ipam/ipam.go 155: Attempting to load block cidr=192.168.42.192/26 host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:46.422223 containerd[1735]: 2025-01-17 12:08:46.380 [INFO][4742] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.42.192/26 host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:46.422223 containerd[1735]: 2025-01-17 12:08:46.380 [INFO][4742] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.42.192/26 handle="k8s-pod-network.1b0bb6dd2d3860acad487e3bb3df6e4fb4cf03c34fc4bd9a2ba7261e0e533aea" host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:46.422223 containerd[1735]: 2025-01-17 12:08:46.382 [INFO][4742] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1b0bb6dd2d3860acad487e3bb3df6e4fb4cf03c34fc4bd9a2ba7261e0e533aea Jan 17 12:08:46.422223 containerd[1735]: 2025-01-17 12:08:46.389 [INFO][4742] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.42.192/26 handle="k8s-pod-network.1b0bb6dd2d3860acad487e3bb3df6e4fb4cf03c34fc4bd9a2ba7261e0e533aea" host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:46.422223 containerd[1735]: 2025-01-17 12:08:46.397 [INFO][4742] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.42.194/26] block=192.168.42.192/26 handle="k8s-pod-network.1b0bb6dd2d3860acad487e3bb3df6e4fb4cf03c34fc4bd9a2ba7261e0e533aea" host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:46.422223 containerd[1735]: 2025-01-17 12:08:46.398 [INFO][4742] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.42.194/26] handle="k8s-pod-network.1b0bb6dd2d3860acad487e3bb3df6e4fb4cf03c34fc4bd9a2ba7261e0e533aea" host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:46.422223 containerd[1735]: 2025-01-17 12:08:46.398 [INFO][4742] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:08:46.422223 containerd[1735]: 2025-01-17 12:08:46.398 [INFO][4742] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.42.194/26] IPv6=[] ContainerID="1b0bb6dd2d3860acad487e3bb3df6e4fb4cf03c34fc4bd9a2ba7261e0e533aea" HandleID="k8s-pod-network.1b0bb6dd2d3860acad487e3bb3df6e4fb4cf03c34fc4bd9a2ba7261e0e533aea" Workload="ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--dvhbm-eth0" Jan 17 12:08:46.423356 containerd[1735]: 2025-01-17 12:08:46.399 [INFO][4731] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1b0bb6dd2d3860acad487e3bb3df6e4fb4cf03c34fc4bd9a2ba7261e0e533aea" Namespace="calico-apiserver" Pod="calico-apiserver-85c7c4654d-dvhbm" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--dvhbm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--dvhbm-eth0", GenerateName:"calico-apiserver-85c7c4654d-", Namespace:"calico-apiserver", SelfLink:"", UID:"9ce371f8-6664-4b2d-8bfc-dc0423d17dd2", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 8, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85c7c4654d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c8756aff3b", ContainerID:"", Pod:"calico-apiserver-85c7c4654d-dvhbm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9b9de31bc06", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:08:46.423356 containerd[1735]: 2025-01-17 12:08:46.400 [INFO][4731] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.42.194/32] ContainerID="1b0bb6dd2d3860acad487e3bb3df6e4fb4cf03c34fc4bd9a2ba7261e0e533aea" Namespace="calico-apiserver" Pod="calico-apiserver-85c7c4654d-dvhbm" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--dvhbm-eth0" Jan 17 12:08:46.423356 containerd[1735]: 2025-01-17 12:08:46.400 [INFO][4731] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b9de31bc06 ContainerID="1b0bb6dd2d3860acad487e3bb3df6e4fb4cf03c34fc4bd9a2ba7261e0e533aea" Namespace="calico-apiserver" Pod="calico-apiserver-85c7c4654d-dvhbm" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--dvhbm-eth0" Jan 17 12:08:46.423356 containerd[1735]: 2025-01-17 12:08:46.403 [INFO][4731] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1b0bb6dd2d3860acad487e3bb3df6e4fb4cf03c34fc4bd9a2ba7261e0e533aea" Namespace="calico-apiserver" Pod="calico-apiserver-85c7c4654d-dvhbm" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--dvhbm-eth0" Jan 17 12:08:46.423356 containerd[1735]: 2025-01-17 12:08:46.405 [INFO][4731] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="1b0bb6dd2d3860acad487e3bb3df6e4fb4cf03c34fc4bd9a2ba7261e0e533aea" Namespace="calico-apiserver" Pod="calico-apiserver-85c7c4654d-dvhbm" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--dvhbm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--dvhbm-eth0", GenerateName:"calico-apiserver-85c7c4654d-", Namespace:"calico-apiserver", SelfLink:"", UID:"9ce371f8-6664-4b2d-8bfc-dc0423d17dd2", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 8, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85c7c4654d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c8756aff3b", ContainerID:"1b0bb6dd2d3860acad487e3bb3df6e4fb4cf03c34fc4bd9a2ba7261e0e533aea", Pod:"calico-apiserver-85c7c4654d-dvhbm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9b9de31bc06", MAC:"2e:e7:7e:72:cc:b9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:08:46.423356 containerd[1735]: 2025-01-17 12:08:46.416 [INFO][4731] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1b0bb6dd2d3860acad487e3bb3df6e4fb4cf03c34fc4bd9a2ba7261e0e533aea" Namespace="calico-apiserver" Pod="calico-apiserver-85c7c4654d-dvhbm" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--dvhbm-eth0" Jan 17 12:08:46.446664 containerd[1735]: time="2025-01-17T12:08:46.446535219Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:08:46.446664 containerd[1735]: time="2025-01-17T12:08:46.446599379Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:08:46.446664 containerd[1735]: time="2025-01-17T12:08:46.446616419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:08:46.447445 containerd[1735]: time="2025-01-17T12:08:46.447341340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:08:46.475337 systemd[1]: Started cri-containerd-1b0bb6dd2d3860acad487e3bb3df6e4fb4cf03c34fc4bd9a2ba7261e0e533aea.scope - libcontainer container 1b0bb6dd2d3860acad487e3bb3df6e4fb4cf03c34fc4bd9a2ba7261e0e533aea. 
Jan 17 12:08:46.508444 containerd[1735]: time="2025-01-17T12:08:46.508407556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85c7c4654d-dvhbm,Uid:9ce371f8-6664-4b2d-8bfc-dc0423d17dd2,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1b0bb6dd2d3860acad487e3bb3df6e4fb4cf03c34fc4bd9a2ba7261e0e533aea\"" Jan 17 12:08:46.865228 containerd[1735]: time="2025-01-17T12:08:46.865044599Z" level=info msg="StopPodSandbox for \"16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7\"" Jan 17 12:08:46.865774 containerd[1735]: time="2025-01-17T12:08:46.865535600Z" level=info msg="StopPodSandbox for \"8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6\"" Jan 17 12:08:46.979746 containerd[1735]: 2025-01-17 12:08:46.937 [INFO][4832] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6" Jan 17 12:08:46.979746 containerd[1735]: 2025-01-17 12:08:46.938 [INFO][4832] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6" iface="eth0" netns="/var/run/netns/cni-613dfc70-6a2d-486a-8f69-0cf674580fce" Jan 17 12:08:46.979746 containerd[1735]: 2025-01-17 12:08:46.938 [INFO][4832] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6" iface="eth0" netns="/var/run/netns/cni-613dfc70-6a2d-486a-8f69-0cf674580fce" Jan 17 12:08:46.979746 containerd[1735]: 2025-01-17 12:08:46.938 [INFO][4832] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6" iface="eth0" netns="/var/run/netns/cni-613dfc70-6a2d-486a-8f69-0cf674580fce" Jan 17 12:08:46.979746 containerd[1735]: 2025-01-17 12:08:46.938 [INFO][4832] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6" Jan 17 12:08:46.979746 containerd[1735]: 2025-01-17 12:08:46.938 [INFO][4832] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6" Jan 17 12:08:46.979746 containerd[1735]: 2025-01-17 12:08:46.963 [INFO][4844] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6" HandleID="k8s-pod-network.8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6" Workload="ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--cskls-eth0" Jan 17 12:08:46.979746 containerd[1735]: 2025-01-17 12:08:46.964 [INFO][4844] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:08:46.979746 containerd[1735]: 2025-01-17 12:08:46.964 [INFO][4844] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:08:46.979746 containerd[1735]: 2025-01-17 12:08:46.972 [WARNING][4844] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6" HandleID="k8s-pod-network.8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6" Workload="ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--cskls-eth0" Jan 17 12:08:46.979746 containerd[1735]: 2025-01-17 12:08:46.972 [INFO][4844] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6" HandleID="k8s-pod-network.8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6" Workload="ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--cskls-eth0" Jan 17 12:08:46.979746 containerd[1735]: 2025-01-17 12:08:46.975 [INFO][4844] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:08:46.979746 containerd[1735]: 2025-01-17 12:08:46.978 [INFO][4832] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6" Jan 17 12:08:46.981057 containerd[1735]: time="2025-01-17T12:08:46.980481622Z" level=info msg="TearDown network for sandbox \"8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6\" successfully" Jan 17 12:08:46.981057 containerd[1735]: time="2025-01-17T12:08:46.980549022Z" level=info msg="StopPodSandbox for \"8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6\" returns successfully" Jan 17 12:08:46.981706 containerd[1735]: time="2025-01-17T12:08:46.981514063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cskls,Uid:a87b1eb5-ab2f-4531-8fae-234c482d801e,Namespace:kube-system,Attempt:1,}" Jan 17 12:08:46.987359 systemd[1]: run-netns-cni\x2d613dfc70\x2d6a2d\x2d486a\x2d8f69\x2d0cf674580fce.mount: Deactivated successfully. Jan 17 12:08:46.994901 containerd[1735]: 2025-01-17 12:08:46.931 [INFO][4816] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7" Jan 17 12:08:46.994901 containerd[1735]: 2025-01-17 12:08:46.932 [INFO][4816] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7" iface="eth0" netns="/var/run/netns/cni-f3824fb7-e95b-ce16-2824-794c65c889ef" Jan 17 12:08:46.994901 containerd[1735]: 2025-01-17 12:08:46.932 [INFO][4816] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7" iface="eth0" netns="/var/run/netns/cni-f3824fb7-e95b-ce16-2824-794c65c889ef" Jan 17 12:08:46.994901 containerd[1735]: 2025-01-17 12:08:46.932 [INFO][4816] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7" iface="eth0" netns="/var/run/netns/cni-f3824fb7-e95b-ce16-2824-794c65c889ef" Jan 17 12:08:46.994901 containerd[1735]: 2025-01-17 12:08:46.932 [INFO][4816] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7" Jan 17 12:08:46.994901 containerd[1735]: 2025-01-17 12:08:46.932 [INFO][4816] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7" Jan 17 12:08:46.994901 containerd[1735]: 2025-01-17 12:08:46.967 [INFO][4840] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7" HandleID="k8s-pod-network.16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7" Workload="ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--9j4zk-eth0" Jan 17 12:08:46.994901 containerd[1735]: 2025-01-17 12:08:46.967 [INFO][4840] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:08:46.994901 containerd[1735]: 2025-01-17 12:08:46.976 [INFO][4840] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:08:46.994901 containerd[1735]: 2025-01-17 12:08:46.990 [WARNING][4840] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7" HandleID="k8s-pod-network.16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7" Workload="ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--9j4zk-eth0" Jan 17 12:08:46.994901 containerd[1735]: 2025-01-17 12:08:46.990 [INFO][4840] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7" HandleID="k8s-pod-network.16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7" Workload="ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--9j4zk-eth0" Jan 17 12:08:46.994901 containerd[1735]: 2025-01-17 12:08:46.991 [INFO][4840] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:08:46.994901 containerd[1735]: 2025-01-17 12:08:46.993 [INFO][4816] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7" Jan 17 12:08:46.996405 containerd[1735]: time="2025-01-17T12:08:46.995162045Z" level=info msg="TearDown network for sandbox \"16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7\" successfully" Jan 17 12:08:46.996405 containerd[1735]: time="2025-01-17T12:08:46.995191565Z" level=info msg="StopPodSandbox for \"16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7\" returns successfully" Jan 17 12:08:46.996405 containerd[1735]: time="2025-01-17T12:08:46.995757686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85c7c4654d-9j4zk,Uid:054e26d4-3078-4a7b-9cfd-e882b8b74093,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:08:46.998073 systemd[1]: run-netns-cni\x2df3824fb7\x2de95b\x2dce16\x2d2824\x2d794c65c889ef.mount: Deactivated successfully. 
Jan 17 12:08:47.024320 systemd-networkd[1442]: cali96cff1ebf16: Gained IPv6LL Jan 17 12:08:47.206267 systemd-networkd[1442]: calid5002783072: Link UP Jan 17 12:08:47.210322 systemd-networkd[1442]: calid5002783072: Gained carrier Jan 17 12:08:47.234000 containerd[1735]: 2025-01-17 12:08:47.106 [INFO][4853] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--cskls-eth0 coredns-7db6d8ff4d- kube-system a87b1eb5-ab2f-4531-8fae-234c482d801e 800 0 2025-01-17 12:08:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-c8756aff3b coredns-7db6d8ff4d-cskls eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid5002783072 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="bd4c7bc8fb4fe41788a6058a9317413af5e8f353f0fe4619d1c55a93ef787aad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cskls" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--cskls-" Jan 17 12:08:47.234000 containerd[1735]: 2025-01-17 12:08:47.106 [INFO][4853] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bd4c7bc8fb4fe41788a6058a9317413af5e8f353f0fe4619d1c55a93ef787aad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cskls" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--cskls-eth0" Jan 17 12:08:47.234000 containerd[1735]: 2025-01-17 12:08:47.153 [INFO][4876] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bd4c7bc8fb4fe41788a6058a9317413af5e8f353f0fe4619d1c55a93ef787aad" HandleID="k8s-pod-network.bd4c7bc8fb4fe41788a6058a9317413af5e8f353f0fe4619d1c55a93ef787aad" Workload="ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--cskls-eth0" Jan 17 12:08:47.234000 containerd[1735]: 2025-01-17 12:08:47.165 [INFO][4876] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bd4c7bc8fb4fe41788a6058a9317413af5e8f353f0fe4619d1c55a93ef787aad" HandleID="k8s-pod-network.bd4c7bc8fb4fe41788a6058a9317413af5e8f353f0fe4619d1c55a93ef787aad" Workload="ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--cskls-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002a0e60), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-c8756aff3b", "pod":"coredns-7db6d8ff4d-cskls", "timestamp":"2025-01-17 12:08:47.153150454 +0000 UTC"}, Hostname:"ci-4081.3.0-a-c8756aff3b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:08:47.234000 containerd[1735]: 2025-01-17 12:08:47.165 [INFO][4876] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:08:47.234000 containerd[1735]: 2025-01-17 12:08:47.166 [INFO][4876] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:08:47.234000 containerd[1735]: 2025-01-17 12:08:47.166 [INFO][4876] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-c8756aff3b' Jan 17 12:08:47.234000 containerd[1735]: 2025-01-17 12:08:47.167 [INFO][4876] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bd4c7bc8fb4fe41788a6058a9317413af5e8f353f0fe4619d1c55a93ef787aad" host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:47.234000 containerd[1735]: 2025-01-17 12:08:47.172 [INFO][4876] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:47.234000 containerd[1735]: 2025-01-17 12:08:47.177 [INFO][4876] ipam/ipam.go 489: Trying affinity for 192.168.42.192/26 host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:47.234000 containerd[1735]: 2025-01-17 12:08:47.179 [INFO][4876] ipam/ipam.go 155: Attempting to load block cidr=192.168.42.192/26 host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:47.234000 containerd[1735]: 2025-01-17 12:08:47.182 [INFO][4876] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.42.192/26 host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:47.234000 containerd[1735]: 2025-01-17 12:08:47.182 [INFO][4876] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.42.192/26 handle="k8s-pod-network.bd4c7bc8fb4fe41788a6058a9317413af5e8f353f0fe4619d1c55a93ef787aad" host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:47.234000 containerd[1735]: 2025-01-17 12:08:47.183 [INFO][4876] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bd4c7bc8fb4fe41788a6058a9317413af5e8f353f0fe4619d1c55a93ef787aad Jan 17 12:08:47.234000 containerd[1735]: 2025-01-17 12:08:47.189 [INFO][4876] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.42.192/26 handle="k8s-pod-network.bd4c7bc8fb4fe41788a6058a9317413af5e8f353f0fe4619d1c55a93ef787aad" host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:47.234000 containerd[1735]: 2025-01-17 12:08:47.197 [INFO][4876] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.42.195/26] block=192.168.42.192/26 handle="k8s-pod-network.bd4c7bc8fb4fe41788a6058a9317413af5e8f353f0fe4619d1c55a93ef787aad" host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:47.234000 containerd[1735]: 2025-01-17 12:08:47.198 [INFO][4876] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.42.195/26] handle="k8s-pod-network.bd4c7bc8fb4fe41788a6058a9317413af5e8f353f0fe4619d1c55a93ef787aad" host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:47.234000 containerd[1735]: 2025-01-17 12:08:47.198 [INFO][4876] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:08:47.234000 containerd[1735]: 2025-01-17 12:08:47.198 [INFO][4876] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.42.195/26] IPv6=[] ContainerID="bd4c7bc8fb4fe41788a6058a9317413af5e8f353f0fe4619d1c55a93ef787aad" HandleID="k8s-pod-network.bd4c7bc8fb4fe41788a6058a9317413af5e8f353f0fe4619d1c55a93ef787aad" Workload="ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--cskls-eth0" Jan 17 12:08:47.235686 containerd[1735]: 2025-01-17 12:08:47.200 [INFO][4853] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bd4c7bc8fb4fe41788a6058a9317413af5e8f353f0fe4619d1c55a93ef787aad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cskls" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--cskls-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--cskls-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a87b1eb5-ab2f-4531-8fae-234c482d801e", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 8, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c8756aff3b", ContainerID:"", Pod:"coredns-7db6d8ff4d-cskls", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid5002783072", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:08:47.235686 containerd[1735]: 2025-01-17 12:08:47.200 [INFO][4853] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.42.195/32] ContainerID="bd4c7bc8fb4fe41788a6058a9317413af5e8f353f0fe4619d1c55a93ef787aad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cskls" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--cskls-eth0" Jan 17 12:08:47.235686 containerd[1735]: 2025-01-17 12:08:47.200 [INFO][4853] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid5002783072 ContainerID="bd4c7bc8fb4fe41788a6058a9317413af5e8f353f0fe4619d1c55a93ef787aad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cskls" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--cskls-eth0" Jan 17 12:08:47.235686 containerd[1735]: 2025-01-17 12:08:47.208 [INFO][4853] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bd4c7bc8fb4fe41788a6058a9317413af5e8f353f0fe4619d1c55a93ef787aad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cskls" 
WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--cskls-eth0" Jan 17 12:08:47.235686 containerd[1735]: 2025-01-17 12:08:47.208 [INFO][4853] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bd4c7bc8fb4fe41788a6058a9317413af5e8f353f0fe4619d1c55a93ef787aad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cskls" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--cskls-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--cskls-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a87b1eb5-ab2f-4531-8fae-234c482d801e", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 8, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c8756aff3b", ContainerID:"bd4c7bc8fb4fe41788a6058a9317413af5e8f353f0fe4619d1c55a93ef787aad", Pod:"coredns-7db6d8ff4d-cskls", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid5002783072", MAC:"d6:8f:74:e1:e8:5c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:08:47.235686 containerd[1735]: 2025-01-17 12:08:47.230 [INFO][4853] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bd4c7bc8fb4fe41788a6058a9317413af5e8f353f0fe4619d1c55a93ef787aad" Namespace="kube-system" Pod="coredns-7db6d8ff4d-cskls" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--cskls-eth0" Jan 17 12:08:47.265031 systemd-networkd[1442]: cali752171d3204: Link UP Jan 17 12:08:47.266177 systemd-networkd[1442]: cali752171d3204: Gained carrier Jan 17 12:08:47.282039 containerd[1735]: time="2025-01-17T12:08:47.281799018Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:08:47.282039 containerd[1735]: time="2025-01-17T12:08:47.281901178Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:08:47.282039 containerd[1735]: time="2025-01-17T12:08:47.281920978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:08:47.282855 containerd[1735]: time="2025-01-17T12:08:47.282190818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:08:47.292043 containerd[1735]: 2025-01-17 12:08:47.108 [INFO][4863] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--9j4zk-eth0 calico-apiserver-85c7c4654d- calico-apiserver 054e26d4-3078-4a7b-9cfd-e882b8b74093 799 0 2025-01-17 12:08:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:85c7c4654d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-c8756aff3b calico-apiserver-85c7c4654d-9j4zk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali752171d3204 [] []}} ContainerID="c150f52d1a38f79525dc5ae6e96c3e75777ead4b11b23995caebe5ead1dabaa4" Namespace="calico-apiserver" Pod="calico-apiserver-85c7c4654d-9j4zk" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--9j4zk-" Jan 17 12:08:47.292043 containerd[1735]: 2025-01-17 12:08:47.108 [INFO][4863] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c150f52d1a38f79525dc5ae6e96c3e75777ead4b11b23995caebe5ead1dabaa4" Namespace="calico-apiserver" Pod="calico-apiserver-85c7c4654d-9j4zk" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--9j4zk-eth0" Jan 17 12:08:47.292043 containerd[1735]: 2025-01-17 12:08:47.155 [INFO][4877] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c150f52d1a38f79525dc5ae6e96c3e75777ead4b11b23995caebe5ead1dabaa4" HandleID="k8s-pod-network.c150f52d1a38f79525dc5ae6e96c3e75777ead4b11b23995caebe5ead1dabaa4" Workload="ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--9j4zk-eth0" Jan 17 12:08:47.292043 containerd[1735]: 2025-01-17 12:08:47.170 [INFO][4877] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c150f52d1a38f79525dc5ae6e96c3e75777ead4b11b23995caebe5ead1dabaa4" HandleID="k8s-pod-network.c150f52d1a38f79525dc5ae6e96c3e75777ead4b11b23995caebe5ead1dabaa4" Workload="ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--9j4zk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000316fd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-c8756aff3b", "pod":"calico-apiserver-85c7c4654d-9j4zk", "timestamp":"2025-01-17 12:08:47.155725578 +0000 UTC"}, Hostname:"ci-4081.3.0-a-c8756aff3b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:08:47.292043 containerd[1735]: 2025-01-17 12:08:47.170 [INFO][4877] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:08:47.292043 containerd[1735]: 2025-01-17 12:08:47.198 [INFO][4877] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:08:47.292043 containerd[1735]: 2025-01-17 12:08:47.198 [INFO][4877] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-c8756aff3b' Jan 17 12:08:47.292043 containerd[1735]: 2025-01-17 12:08:47.200 [INFO][4877] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c150f52d1a38f79525dc5ae6e96c3e75777ead4b11b23995caebe5ead1dabaa4" host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:47.292043 containerd[1735]: 2025-01-17 12:08:47.210 [INFO][4877] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:47.292043 containerd[1735]: 2025-01-17 12:08:47.231 [INFO][4877] ipam/ipam.go 489: Trying affinity for 192.168.42.192/26 host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:47.292043 containerd[1735]: 2025-01-17 12:08:47.235 [INFO][4877] ipam/ipam.go 155: Attempting to load block cidr=192.168.42.192/26 host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:47.292043 containerd[1735]: 2025-01-17 12:08:47.238 [INFO][4877] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.42.192/26 host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:47.292043 containerd[1735]: 2025-01-17 12:08:47.239 [INFO][4877] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.42.192/26 handle="k8s-pod-network.c150f52d1a38f79525dc5ae6e96c3e75777ead4b11b23995caebe5ead1dabaa4" host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:47.292043 containerd[1735]: 2025-01-17 12:08:47.241 [INFO][4877] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c150f52d1a38f79525dc5ae6e96c3e75777ead4b11b23995caebe5ead1dabaa4 Jan 17 12:08:47.292043 containerd[1735]: 2025-01-17 12:08:47.250 [INFO][4877] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.42.192/26 handle="k8s-pod-network.c150f52d1a38f79525dc5ae6e96c3e75777ead4b11b23995caebe5ead1dabaa4" host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:47.292043 containerd[1735]: 2025-01-17 12:08:47.257 [INFO][4877] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.42.196/26] block=192.168.42.192/26 handle="k8s-pod-network.c150f52d1a38f79525dc5ae6e96c3e75777ead4b11b23995caebe5ead1dabaa4" host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:47.292043 containerd[1735]: 2025-01-17 12:08:47.257 [INFO][4877] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.42.196/26] handle="k8s-pod-network.c150f52d1a38f79525dc5ae6e96c3e75777ead4b11b23995caebe5ead1dabaa4" host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:47.292043 containerd[1735]: 2025-01-17 12:08:47.257 [INFO][4877] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:08:47.292043 containerd[1735]: 2025-01-17 12:08:47.257 [INFO][4877] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.42.196/26] IPv6=[] ContainerID="c150f52d1a38f79525dc5ae6e96c3e75777ead4b11b23995caebe5ead1dabaa4" HandleID="k8s-pod-network.c150f52d1a38f79525dc5ae6e96c3e75777ead4b11b23995caebe5ead1dabaa4" Workload="ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--9j4zk-eth0" Jan 17 12:08:47.292593 containerd[1735]: 2025-01-17 12:08:47.260 [INFO][4863] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c150f52d1a38f79525dc5ae6e96c3e75777ead4b11b23995caebe5ead1dabaa4" Namespace="calico-apiserver" Pod="calico-apiserver-85c7c4654d-9j4zk" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--9j4zk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--9j4zk-eth0", GenerateName:"calico-apiserver-85c7c4654d-", Namespace:"calico-apiserver", SelfLink:"", UID:"054e26d4-3078-4a7b-9cfd-e882b8b74093", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 8, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85c7c4654d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c8756aff3b", ContainerID:"", Pod:"calico-apiserver-85c7c4654d-9j4zk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali752171d3204", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:08:47.292593 containerd[1735]: 2025-01-17 12:08:47.261 [INFO][4863] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.42.196/32] ContainerID="c150f52d1a38f79525dc5ae6e96c3e75777ead4b11b23995caebe5ead1dabaa4" Namespace="calico-apiserver" Pod="calico-apiserver-85c7c4654d-9j4zk" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--9j4zk-eth0" Jan 17 12:08:47.292593 containerd[1735]: 2025-01-17 12:08:47.261 [INFO][4863] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali752171d3204 ContainerID="c150f52d1a38f79525dc5ae6e96c3e75777ead4b11b23995caebe5ead1dabaa4" Namespace="calico-apiserver" Pod="calico-apiserver-85c7c4654d-9j4zk" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--9j4zk-eth0" Jan 17 12:08:47.292593 containerd[1735]: 2025-01-17 12:08:47.267 [INFO][4863] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c150f52d1a38f79525dc5ae6e96c3e75777ead4b11b23995caebe5ead1dabaa4" Namespace="calico-apiserver" Pod="calico-apiserver-85c7c4654d-9j4zk" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--9j4zk-eth0" Jan 17 12:08:47.292593 containerd[1735]: 2025-01-17 12:08:47.267 [INFO][4863] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="c150f52d1a38f79525dc5ae6e96c3e75777ead4b11b23995caebe5ead1dabaa4" Namespace="calico-apiserver" Pod="calico-apiserver-85c7c4654d-9j4zk" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--9j4zk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--9j4zk-eth0", GenerateName:"calico-apiserver-85c7c4654d-", Namespace:"calico-apiserver", SelfLink:"", UID:"054e26d4-3078-4a7b-9cfd-e882b8b74093", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 8, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85c7c4654d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c8756aff3b", ContainerID:"c150f52d1a38f79525dc5ae6e96c3e75777ead4b11b23995caebe5ead1dabaa4", Pod:"calico-apiserver-85c7c4654d-9j4zk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali752171d3204", MAC:"8a:1a:34:eb:e8:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:08:47.292593 containerd[1735]: 2025-01-17 12:08:47.285 [INFO][4863] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c150f52d1a38f79525dc5ae6e96c3e75777ead4b11b23995caebe5ead1dabaa4" Namespace="calico-apiserver" Pod="calico-apiserver-85c7c4654d-9j4zk" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--9j4zk-eth0" Jan 17 12:08:47.317350 systemd[1]: Started cri-containerd-bd4c7bc8fb4fe41788a6058a9317413af5e8f353f0fe4619d1c55a93ef787aad.scope - libcontainer container bd4c7bc8fb4fe41788a6058a9317413af5e8f353f0fe4619d1c55a93ef787aad. Jan 17 12:08:47.328805 containerd[1735]: time="2025-01-17T12:08:47.328709132Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:08:47.328960 containerd[1735]: time="2025-01-17T12:08:47.328772572Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:08:47.328960 containerd[1735]: time="2025-01-17T12:08:47.328805452Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:08:47.328960 containerd[1735]: time="2025-01-17T12:08:47.328894532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:08:47.348342 systemd[1]: Started cri-containerd-c150f52d1a38f79525dc5ae6e96c3e75777ead4b11b23995caebe5ead1dabaa4.scope - libcontainer container c150f52d1a38f79525dc5ae6e96c3e75777ead4b11b23995caebe5ead1dabaa4. 
Jan 17 12:08:47.361778 containerd[1735]: time="2025-01-17T12:08:47.361728104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-cskls,Uid:a87b1eb5-ab2f-4531-8fae-234c482d801e,Namespace:kube-system,Attempt:1,} returns sandbox id \"bd4c7bc8fb4fe41788a6058a9317413af5e8f353f0fe4619d1c55a93ef787aad\"" Jan 17 12:08:47.371825 containerd[1735]: time="2025-01-17T12:08:47.371747640Z" level=info msg="CreateContainer within sandbox \"bd4c7bc8fb4fe41788a6058a9317413af5e8f353f0fe4619d1c55a93ef787aad\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:08:47.393363 containerd[1735]: time="2025-01-17T12:08:47.393313154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85c7c4654d-9j4zk,Uid:054e26d4-3078-4a7b-9cfd-e882b8b74093,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c150f52d1a38f79525dc5ae6e96c3e75777ead4b11b23995caebe5ead1dabaa4\"" Jan 17 12:08:47.407781 containerd[1735]: time="2025-01-17T12:08:47.407649576Z" level=info msg="CreateContainer within sandbox \"bd4c7bc8fb4fe41788a6058a9317413af5e8f353f0fe4619d1c55a93ef787aad\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"90dd61aae8d65b0fad164128f861a39a60f537cce58270b8b81adcc8bc65bd1a\"" Jan 17 12:08:47.408739 containerd[1735]: time="2025-01-17T12:08:47.408262257Z" level=info msg="StartContainer for \"90dd61aae8d65b0fad164128f861a39a60f537cce58270b8b81adcc8bc65bd1a\"" Jan 17 12:08:47.431358 systemd[1]: Started cri-containerd-90dd61aae8d65b0fad164128f861a39a60f537cce58270b8b81adcc8bc65bd1a.scope - libcontainer container 90dd61aae8d65b0fad164128f861a39a60f537cce58270b8b81adcc8bc65bd1a. Jan 17 12:08:47.457323 containerd[1735]: time="2025-01-17T12:08:47.457269375Z" level=info msg="StartContainer for \"90dd61aae8d65b0fad164128f861a39a60f537cce58270b8b81adcc8bc65bd1a\" returns successfully" Jan 17 12:08:47.866267 containerd[1735]: time="2025-01-17T12:08:47.865256179Z" level=info msg="StopPodSandbox for \"e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1\"" Jan 17 12:08:47.966250 containerd[1735]: 2025-01-17 12:08:47.916 [INFO][5046] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1" Jan 17 12:08:47.966250 containerd[1735]: 2025-01-17 12:08:47.917 [INFO][5046] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1" iface="eth0" netns="/var/run/netns/cni-93dc853e-7749-0de8-b666-1b88ec2e6849" Jan 17 12:08:47.966250 containerd[1735]: 2025-01-17 12:08:47.918 [INFO][5046] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1" iface="eth0" netns="/var/run/netns/cni-93dc853e-7749-0de8-b666-1b88ec2e6849" Jan 17 12:08:47.966250 containerd[1735]: 2025-01-17 12:08:47.919 [INFO][5046] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1" iface="eth0" netns="/var/run/netns/cni-93dc853e-7749-0de8-b666-1b88ec2e6849" Jan 17 12:08:47.966250 containerd[1735]: 2025-01-17 12:08:47.919 [INFO][5046] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1" Jan 17 12:08:47.966250 containerd[1735]: 2025-01-17 12:08:47.919 [INFO][5046] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1" Jan 17 12:08:47.966250 containerd[1735]: 2025-01-17 12:08:47.942 [INFO][5052] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1" HandleID="k8s-pod-network.e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1" Workload="ci--4081.3.0--a--c8756aff3b-k8s-csi--node--driver--zwfv8-eth0" Jan 17 12:08:47.966250 containerd[1735]: 2025-01-17 12:08:47.943 [INFO][5052] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:08:47.966250 containerd[1735]: 2025-01-17 12:08:47.943 [INFO][5052] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:08:47.966250 containerd[1735]: 2025-01-17 12:08:47.951 [WARNING][5052] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1" HandleID="k8s-pod-network.e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1" Workload="ci--4081.3.0--a--c8756aff3b-k8s-csi--node--driver--zwfv8-eth0" Jan 17 12:08:47.966250 containerd[1735]: 2025-01-17 12:08:47.951 [INFO][5052] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1" HandleID="k8s-pod-network.e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1" Workload="ci--4081.3.0--a--c8756aff3b-k8s-csi--node--driver--zwfv8-eth0" Jan 17 12:08:47.966250 containerd[1735]: 2025-01-17 12:08:47.963 [INFO][5052] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:08:47.966250 containerd[1735]: 2025-01-17 12:08:47.965 [INFO][5046] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1" Jan 17 12:08:47.966718 containerd[1735]: time="2025-01-17T12:08:47.966460779Z" level=info msg="TearDown network for sandbox \"e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1\" successfully" Jan 17 12:08:47.966718 containerd[1735]: time="2025-01-17T12:08:47.966489739Z" level=info msg="StopPodSandbox for \"e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1\" returns successfully" Jan 17 12:08:47.967466 containerd[1735]: time="2025-01-17T12:08:47.967197020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zwfv8,Uid:c1a3420f-a34b-41c3-a151-733080e0373a,Namespace:calico-system,Attempt:1,}" Jan 17 12:08:47.990069 systemd[1]: run-netns-cni\x2d93dc853e\x2d7749\x2d0de8\x2db666\x2d1b88ec2e6849.mount: Deactivated successfully. 
Jan 17 12:08:48.049789 systemd-networkd[1442]: cali9b9de31bc06: Gained IPv6LL Jan 17 12:08:48.165585 systemd-networkd[1442]: cali60f49803e14: Link UP Jan 17 12:08:48.166714 systemd-networkd[1442]: cali60f49803e14: Gained carrier Jan 17 12:08:48.179901 kubelet[3279]: I0117 12:08:48.179494 3279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-cskls" podStartSLOduration=34.179440315 podStartE2EDuration="34.179440315s" podCreationTimestamp="2025-01-17 12:08:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:08:48.179130994 +0000 UTC m=+50.427061387" watchObservedRunningTime="2025-01-17 12:08:48.179440315 +0000 UTC m=+50.427370988" Jan 17 12:08:48.226406 containerd[1735]: 2025-01-17 12:08:48.047 [INFO][5059] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--c8756aff3b-k8s-csi--node--driver--zwfv8-eth0 csi-node-driver- calico-system c1a3420f-a34b-41c3-a151-733080e0373a 815 0 2025-01-17 12:08:21 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.0-a-c8756aff3b csi-node-driver-zwfv8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali60f49803e14 [] []}} ContainerID="a600cd4fcb9b9c842aca8fae005f057b4a5eab3bb770f4cd1e8a560863545387" Namespace="calico-system" Pod="csi-node-driver-zwfv8" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-csi--node--driver--zwfv8-" Jan 17 12:08:48.226406 containerd[1735]: 2025-01-17 12:08:48.048 [INFO][5059] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a600cd4fcb9b9c842aca8fae005f057b4a5eab3bb770f4cd1e8a560863545387" Namespace="calico-system" Pod="csi-node-driver-zwfv8" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-csi--node--driver--zwfv8-eth0" Jan 17 12:08:48.226406 containerd[1735]: 2025-01-17 12:08:48.081 [INFO][5069] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a600cd4fcb9b9c842aca8fae005f057b4a5eab3bb770f4cd1e8a560863545387" HandleID="k8s-pod-network.a600cd4fcb9b9c842aca8fae005f057b4a5eab3bb770f4cd1e8a560863545387" Workload="ci--4081.3.0--a--c8756aff3b-k8s-csi--node--driver--zwfv8-eth0" Jan 17 12:08:48.226406 containerd[1735]: 2025-01-17 12:08:48.097 [INFO][5069] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a600cd4fcb9b9c842aca8fae005f057b4a5eab3bb770f4cd1e8a560863545387" HandleID="k8s-pod-network.a600cd4fcb9b9c842aca8fae005f057b4a5eab3bb770f4cd1e8a560863545387" Workload="ci--4081.3.0--a--c8756aff3b-k8s-csi--node--driver--zwfv8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000284790), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-c8756aff3b", "pod":"csi-node-driver-zwfv8", "timestamp":"2025-01-17 12:08:48.081904121 +0000 UTC"}, Hostname:"ci-4081.3.0-a-c8756aff3b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:08:48.226406 containerd[1735]: 2025-01-17 12:08:48.097 [INFO][5069] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 17 12:08:48.226406 containerd[1735]: 2025-01-17 12:08:48.097 [INFO][5069] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:08:48.226406 containerd[1735]: 2025-01-17 12:08:48.097 [INFO][5069] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-c8756aff3b' Jan 17 12:08:48.226406 containerd[1735]: 2025-01-17 12:08:48.099 [INFO][5069] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a600cd4fcb9b9c842aca8fae005f057b4a5eab3bb770f4cd1e8a560863545387" host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:48.226406 containerd[1735]: 2025-01-17 12:08:48.103 [INFO][5069] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:48.226406 containerd[1735]: 2025-01-17 12:08:48.109 [INFO][5069] ipam/ipam.go 489: Trying affinity for 192.168.42.192/26 host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:48.226406 containerd[1735]: 2025-01-17 12:08:48.112 [INFO][5069] ipam/ipam.go 155: Attempting to load block cidr=192.168.42.192/26 host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:48.226406 containerd[1735]: 2025-01-17 12:08:48.115 [INFO][5069] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.42.192/26 host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:48.226406 containerd[1735]: 2025-01-17 12:08:48.115 [INFO][5069] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.42.192/26 handle="k8s-pod-network.a600cd4fcb9b9c842aca8fae005f057b4a5eab3bb770f4cd1e8a560863545387" host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:48.226406 containerd[1735]: 2025-01-17 12:08:48.118 [INFO][5069] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a600cd4fcb9b9c842aca8fae005f057b4a5eab3bb770f4cd1e8a560863545387 Jan 17 12:08:48.226406 containerd[1735]: 2025-01-17 12:08:48.125 [INFO][5069] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.42.192/26 handle="k8s-pod-network.a600cd4fcb9b9c842aca8fae005f057b4a5eab3bb770f4cd1e8a560863545387" host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:48.226406 containerd[1735]: 2025-01-17 12:08:48.147 [INFO][5069] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.42.197/26] block=192.168.42.192/26 handle="k8s-pod-network.a600cd4fcb9b9c842aca8fae005f057b4a5eab3bb770f4cd1e8a560863545387" host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:48.226406 containerd[1735]: 2025-01-17 12:08:48.147 [INFO][5069] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.42.197/26] handle="k8s-pod-network.a600cd4fcb9b9c842aca8fae005f057b4a5eab3bb770f4cd1e8a560863545387" host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:48.226406 containerd[1735]: 2025-01-17 12:08:48.147 [INFO][5069] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
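The assignment above walks the host's affine block 192.168.42.192/26 and claims the first free slot, landing on 192.168.42.197. A toy first-fit allocator reproduces the arithmetic, assuming the first five slots of the block are already taken (consistent with .193 having gone to the kube-controllers pod earlier and .198 going to coredns next); Calico's real allocator persists blocks in the datastore, so this in-memory bitmap is only an illustration:

    package main

    import (
        "fmt"
        "net/netip"
    )

    // block is a toy stand-in for an affine IPAM block: a /26 gives 64 slots.
    type block struct {
        cidr netip.Prefix
        used [64]bool
    }

    // assign returns the first free address in the block, first-fit.
    func (b *block) assign() (netip.Addr, bool) {
        addr := b.cidr.Addr()
        for i := range b.used {
            if !b.used[i] {
                b.used[i] = true
                return addr, true
            }
            addr = addr.Next()
        }
        return netip.Addr{}, false
    }

    func main() {
        b := &block{cidr: netip.MustParsePrefix("192.168.42.192/26")}
        for i := 0; i < 5; i++ { // assume .192-.196 already allocated on this host
            b.used[i] = true
        }
        ip, _ := b.assign()
        fmt.Println(ip) // 192.168.42.197, matching the claim logged above
    }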
Jan 17 12:08:48.226406 containerd[1735]: 2025-01-17 12:08:48.147 [INFO][5069] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.42.197/26] IPv6=[] ContainerID="a600cd4fcb9b9c842aca8fae005f057b4a5eab3bb770f4cd1e8a560863545387" HandleID="k8s-pod-network.a600cd4fcb9b9c842aca8fae005f057b4a5eab3bb770f4cd1e8a560863545387" Workload="ci--4081.3.0--a--c8756aff3b-k8s-csi--node--driver--zwfv8-eth0" Jan 17 12:08:48.227225 containerd[1735]: 2025-01-17 12:08:48.156 [INFO][5059] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a600cd4fcb9b9c842aca8fae005f057b4a5eab3bb770f4cd1e8a560863545387" Namespace="calico-system" Pod="csi-node-driver-zwfv8" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-csi--node--driver--zwfv8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c8756aff3b-k8s-csi--node--driver--zwfv8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c1a3420f-a34b-41c3-a151-733080e0373a", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 8, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c8756aff3b", ContainerID:"", Pod:"csi-node-driver-zwfv8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.42.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali60f49803e14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:08:48.227225 containerd[1735]: 2025-01-17 12:08:48.157 [INFO][5059] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.42.197/32] ContainerID="a600cd4fcb9b9c842aca8fae005f057b4a5eab3bb770f4cd1e8a560863545387" Namespace="calico-system" Pod="csi-node-driver-zwfv8" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-csi--node--driver--zwfv8-eth0" Jan 17 12:08:48.227225 containerd[1735]: 2025-01-17 12:08:48.157 [INFO][5059] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60f49803e14 ContainerID="a600cd4fcb9b9c842aca8fae005f057b4a5eab3bb770f4cd1e8a560863545387" Namespace="calico-system" Pod="csi-node-driver-zwfv8" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-csi--node--driver--zwfv8-eth0" Jan 17 12:08:48.227225 containerd[1735]: 2025-01-17 12:08:48.166 [INFO][5059] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a600cd4fcb9b9c842aca8fae005f057b4a5eab3bb770f4cd1e8a560863545387" Namespace="calico-system" Pod="csi-node-driver-zwfv8" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-csi--node--driver--zwfv8-eth0" Jan 17 12:08:48.227225 containerd[1735]: 2025-01-17 12:08:48.170 [INFO][5059] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="a600cd4fcb9b9c842aca8fae005f057b4a5eab3bb770f4cd1e8a560863545387" Namespace="calico-system" Pod="csi-node-driver-zwfv8" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-csi--node--driver--zwfv8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c8756aff3b-k8s-csi--node--driver--zwfv8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c1a3420f-a34b-41c3-a151-733080e0373a", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 8, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c8756aff3b", ContainerID:"a600cd4fcb9b9c842aca8fae005f057b4a5eab3bb770f4cd1e8a560863545387", Pod:"csi-node-driver-zwfv8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.42.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali60f49803e14", MAC:"2a:99:cc:25:e6:5e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:08:48.227225 containerd[1735]: 2025-01-17 12:08:48.216 [INFO][5059] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a600cd4fcb9b9c842aca8fae005f057b4a5eab3bb770f4cd1e8a560863545387" Namespace="calico-system" Pod="csi-node-driver-zwfv8" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-csi--node--driver--zwfv8-eth0" Jan 17 12:08:48.263000 containerd[1735]: time="2025-01-17T12:08:48.262854207Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:08:48.263000 containerd[1735]: time="2025-01-17T12:08:48.262940007Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:08:48.263000 containerd[1735]: time="2025-01-17T12:08:48.262952407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:08:48.263398 containerd[1735]: time="2025-01-17T12:08:48.263154287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:08:48.297352 systemd[1]: Started cri-containerd-a600cd4fcb9b9c842aca8fae005f057b4a5eab3bb770f4cd1e8a560863545387.scope - libcontainer container a600cd4fcb9b9c842aca8fae005f057b4a5eab3bb770f4cd1e8a560863545387. 
Jan 17 12:08:48.349938 containerd[1735]: time="2025-01-17T12:08:48.349879304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zwfv8,Uid:c1a3420f-a34b-41c3-a151-733080e0373a,Namespace:calico-system,Attempt:1,} returns sandbox id \"a600cd4fcb9b9c842aca8fae005f057b4a5eab3bb770f4cd1e8a560863545387\"" Jan 17 12:08:48.368425 systemd-networkd[1442]: calid5002783072: Gained IPv6LL Jan 17 12:08:48.864993 containerd[1735]: time="2025-01-17T12:08:48.864919117Z" level=info msg="StopPodSandbox for \"7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495\"" Jan 17 12:08:48.944372 systemd-networkd[1442]: cali752171d3204: Gained IPv6LL Jan 17 12:08:48.978669 containerd[1735]: 2025-01-17 12:08:48.924 [INFO][5161] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495" Jan 17 12:08:48.978669 containerd[1735]: 2025-01-17 12:08:48.924 [INFO][5161] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495" iface="eth0" netns="/var/run/netns/cni-a2fc3253-8dca-c085-de5f-48987b3f631b" Jan 17 12:08:48.978669 containerd[1735]: 2025-01-17 12:08:48.925 [INFO][5161] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495" iface="eth0" netns="/var/run/netns/cni-a2fc3253-8dca-c085-de5f-48987b3f631b" Jan 17 12:08:48.978669 containerd[1735]: 2025-01-17 12:08:48.926 [INFO][5161] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495" iface="eth0" netns="/var/run/netns/cni-a2fc3253-8dca-c085-de5f-48987b3f631b" Jan 17 12:08:48.978669 containerd[1735]: 2025-01-17 12:08:48.926 [INFO][5161] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495" Jan 17 12:08:48.978669 containerd[1735]: 2025-01-17 12:08:48.926 [INFO][5161] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495" Jan 17 12:08:48.978669 containerd[1735]: 2025-01-17 12:08:48.958 [INFO][5170] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495" HandleID="k8s-pod-network.7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495" Workload="ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--qn5wj-eth0" Jan 17 12:08:48.978669 containerd[1735]: 2025-01-17 12:08:48.958 [INFO][5170] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:08:48.978669 containerd[1735]: 2025-01-17 12:08:48.959 [INFO][5170] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:08:48.978669 containerd[1735]: 2025-01-17 12:08:48.973 [WARNING][5170] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495" HandleID="k8s-pod-network.7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495" Workload="ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--qn5wj-eth0" Jan 17 12:08:48.978669 containerd[1735]: 2025-01-17 12:08:48.973 [INFO][5170] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495" HandleID="k8s-pod-network.7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495" Workload="ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--qn5wj-eth0" Jan 17 12:08:48.978669 containerd[1735]: 2025-01-17 12:08:48.975 [INFO][5170] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:08:48.978669 containerd[1735]: 2025-01-17 12:08:48.976 [INFO][5161] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495" Jan 17 12:08:48.979920 containerd[1735]: time="2025-01-17T12:08:48.979197418Z" level=info msg="TearDown network for sandbox \"7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495\" successfully" Jan 17 12:08:48.979920 containerd[1735]: time="2025-01-17T12:08:48.979234018Z" level=info msg="StopPodSandbox for \"7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495\" returns successfully" Jan 17 12:08:48.980301 containerd[1735]: time="2025-01-17T12:08:48.980089939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qn5wj,Uid:ce3e0252-15d5-43de-ba60-d0523e069f90,Namespace:kube-system,Attempt:1,}" Jan 17 12:08:48.989499 systemd[1]: run-netns-cni\x2da2fc3253\x2d8dca\x2dc085\x2dde5f\x2d48987b3f631b.mount: Deactivated successfully. Jan 17 12:08:49.045651 containerd[1735]: time="2025-01-17T12:08:49.045586843Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:08:49.050882 containerd[1735]: time="2025-01-17T12:08:49.050812931Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=31953828" Jan 17 12:08:49.055130 containerd[1735]: time="2025-01-17T12:08:49.054970937Z" level=info msg="ImageCreate event name:\"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:08:49.068716 containerd[1735]: time="2025-01-17T12:08:49.068653999Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:08:49.070521 containerd[1735]: time="2025-01-17T12:08:49.070465402Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"33323450\" in 3.296278405s" Jan 17 12:08:49.070521 containerd[1735]: time="2025-01-17T12:08:49.070514842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Jan 17 12:08:49.075232 containerd[1735]: 
time="2025-01-17T12:08:49.073319566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:08:49.099411 containerd[1735]: time="2025-01-17T12:08:49.099356567Z" level=info msg="CreateContainer within sandbox \"9d45ee43997fbd28c3ccc073fb10e523f4bbb5c2fb4c802132f04bd02fc2cd78\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 17 12:08:49.166602 containerd[1735]: time="2025-01-17T12:08:49.166394353Z" level=info msg="CreateContainer within sandbox \"9d45ee43997fbd28c3ccc073fb10e523f4bbb5c2fb4c802132f04bd02fc2cd78\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"8c0d519ed5190cf4caf4359f60ad1401ef73a01d7689d988b3f34282036a0012\"" Jan 17 12:08:49.168036 containerd[1735]: time="2025-01-17T12:08:49.168002516Z" level=info msg="StartContainer for \"8c0d519ed5190cf4caf4359f60ad1401ef73a01d7689d988b3f34282036a0012\"" Jan 17 12:08:49.204564 systemd[1]: Started cri-containerd-8c0d519ed5190cf4caf4359f60ad1401ef73a01d7689d988b3f34282036a0012.scope - libcontainer container 8c0d519ed5190cf4caf4359f60ad1401ef73a01d7689d988b3f34282036a0012. Jan 17 12:08:49.231220 systemd-networkd[1442]: cali67031210afb: Link UP Jan 17 12:08:49.231489 systemd-networkd[1442]: cali67031210afb: Gained carrier Jan 17 12:08:49.270876 containerd[1735]: 2025-01-17 12:08:49.113 [INFO][5176] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--qn5wj-eth0 coredns-7db6d8ff4d- kube-system ce3e0252-15d5-43de-ba60-d0523e069f90 829 0 2025-01-17 12:08:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-c8756aff3b coredns-7db6d8ff4d-qn5wj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali67031210afb [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2cf627686e2a1a3563d38c95c024c3ae3198d2e2a1456636aeef6f515992b018" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qn5wj" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--qn5wj-" Jan 17 12:08:49.270876 containerd[1735]: 2025-01-17 12:08:49.113 [INFO][5176] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2cf627686e2a1a3563d38c95c024c3ae3198d2e2a1456636aeef6f515992b018" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qn5wj" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--qn5wj-eth0" Jan 17 12:08:49.270876 containerd[1735]: 2025-01-17 12:08:49.147 [INFO][5190] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2cf627686e2a1a3563d38c95c024c3ae3198d2e2a1456636aeef6f515992b018" HandleID="k8s-pod-network.2cf627686e2a1a3563d38c95c024c3ae3198d2e2a1456636aeef6f515992b018" Workload="ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--qn5wj-eth0" Jan 17 12:08:49.270876 containerd[1735]: 2025-01-17 12:08:49.161 [INFO][5190] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2cf627686e2a1a3563d38c95c024c3ae3198d2e2a1456636aeef6f515992b018" HandleID="k8s-pod-network.2cf627686e2a1a3563d38c95c024c3ae3198d2e2a1456636aeef6f515992b018" Workload="ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--qn5wj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400028cb70), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-c8756aff3b", "pod":"coredns-7db6d8ff4d-qn5wj", 
"timestamp":"2025-01-17 12:08:49.147821084 +0000 UTC"}, Hostname:"ci-4081.3.0-a-c8756aff3b", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:08:49.270876 containerd[1735]: 2025-01-17 12:08:49.163 [INFO][5190] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:08:49.270876 containerd[1735]: 2025-01-17 12:08:49.163 [INFO][5190] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:08:49.270876 containerd[1735]: 2025-01-17 12:08:49.163 [INFO][5190] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-c8756aff3b' Jan 17 12:08:49.270876 containerd[1735]: 2025-01-17 12:08:49.165 [INFO][5190] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2cf627686e2a1a3563d38c95c024c3ae3198d2e2a1456636aeef6f515992b018" host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:49.270876 containerd[1735]: 2025-01-17 12:08:49.171 [INFO][5190] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:49.270876 containerd[1735]: 2025-01-17 12:08:49.180 [INFO][5190] ipam/ipam.go 489: Trying affinity for 192.168.42.192/26 host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:49.270876 containerd[1735]: 2025-01-17 12:08:49.183 [INFO][5190] ipam/ipam.go 155: Attempting to load block cidr=192.168.42.192/26 host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:49.270876 containerd[1735]: 2025-01-17 12:08:49.188 [INFO][5190] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.42.192/26 host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:49.270876 containerd[1735]: 2025-01-17 12:08:49.188 [INFO][5190] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.42.192/26 handle="k8s-pod-network.2cf627686e2a1a3563d38c95c024c3ae3198d2e2a1456636aeef6f515992b018" host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:49.270876 containerd[1735]: 2025-01-17 12:08:49.195 [INFO][5190] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2cf627686e2a1a3563d38c95c024c3ae3198d2e2a1456636aeef6f515992b018 Jan 17 12:08:49.270876 containerd[1735]: 2025-01-17 12:08:49.203 [INFO][5190] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.42.192/26 handle="k8s-pod-network.2cf627686e2a1a3563d38c95c024c3ae3198d2e2a1456636aeef6f515992b018" host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:49.270876 containerd[1735]: 2025-01-17 12:08:49.221 [INFO][5190] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.42.198/26] block=192.168.42.192/26 handle="k8s-pod-network.2cf627686e2a1a3563d38c95c024c3ae3198d2e2a1456636aeef6f515992b018" host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:49.270876 containerd[1735]: 2025-01-17 12:08:49.221 [INFO][5190] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.42.198/26] handle="k8s-pod-network.2cf627686e2a1a3563d38c95c024c3ae3198d2e2a1456636aeef6f515992b018" host="ci-4081.3.0-a-c8756aff3b" Jan 17 12:08:49.270876 containerd[1735]: 2025-01-17 12:08:49.221 [INFO][5190] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:08:49.270876 containerd[1735]: 2025-01-17 12:08:49.221 [INFO][5190] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.42.198/26] IPv6=[] ContainerID="2cf627686e2a1a3563d38c95c024c3ae3198d2e2a1456636aeef6f515992b018" HandleID="k8s-pod-network.2cf627686e2a1a3563d38c95c024c3ae3198d2e2a1456636aeef6f515992b018" Workload="ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--qn5wj-eth0" Jan 17 12:08:49.271902 containerd[1735]: 2025-01-17 12:08:49.223 [INFO][5176] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2cf627686e2a1a3563d38c95c024c3ae3198d2e2a1456636aeef6f515992b018" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qn5wj" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--qn5wj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--qn5wj-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ce3e0252-15d5-43de-ba60-d0523e069f90", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 8, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c8756aff3b", ContainerID:"", Pod:"coredns-7db6d8ff4d-qn5wj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali67031210afb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:08:49.271902 containerd[1735]: 2025-01-17 12:08:49.223 [INFO][5176] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.42.198/32] ContainerID="2cf627686e2a1a3563d38c95c024c3ae3198d2e2a1456636aeef6f515992b018" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qn5wj" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--qn5wj-eth0" Jan 17 12:08:49.271902 containerd[1735]: 2025-01-17 12:08:49.223 [INFO][5176] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali67031210afb ContainerID="2cf627686e2a1a3563d38c95c024c3ae3198d2e2a1456636aeef6f515992b018" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qn5wj" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--qn5wj-eth0" Jan 17 12:08:49.271902 containerd[1735]: 2025-01-17 12:08:49.228 [INFO][5176] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2cf627686e2a1a3563d38c95c024c3ae3198d2e2a1456636aeef6f515992b018" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qn5wj" 
WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--qn5wj-eth0" Jan 17 12:08:49.271902 containerd[1735]: 2025-01-17 12:08:49.228 [INFO][5176] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2cf627686e2a1a3563d38c95c024c3ae3198d2e2a1456636aeef6f515992b018" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qn5wj" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--qn5wj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--qn5wj-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ce3e0252-15d5-43de-ba60-d0523e069f90", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 8, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c8756aff3b", ContainerID:"2cf627686e2a1a3563d38c95c024c3ae3198d2e2a1456636aeef6f515992b018", Pod:"coredns-7db6d8ff4d-qn5wj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali67031210afb", MAC:"92:9f:0a:a5:1f:f8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:08:49.271902 containerd[1735]: 2025-01-17 12:08:49.265 [INFO][5176] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2cf627686e2a1a3563d38c95c024c3ae3198d2e2a1456636aeef6f515992b018" Namespace="kube-system" Pod="coredns-7db6d8ff4d-qn5wj" WorkloadEndpoint="ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--qn5wj-eth0" Jan 17 12:08:49.283046 containerd[1735]: time="2025-01-17T12:08:49.282895657Z" level=info msg="StartContainer for \"8c0d519ed5190cf4caf4359f60ad1401ef73a01d7689d988b3f34282036a0012\" returns successfully" Jan 17 12:08:49.321965 containerd[1735]: time="2025-01-17T12:08:49.321267678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:08:49.324332 containerd[1735]: time="2025-01-17T12:08:49.321722959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:08:49.324332 containerd[1735]: time="2025-01-17T12:08:49.321803919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:08:49.324332 containerd[1735]: time="2025-01-17T12:08:49.321917719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:08:49.354807 systemd[1]: Started cri-containerd-2cf627686e2a1a3563d38c95c024c3ae3198d2e2a1456636aeef6f515992b018.scope - libcontainer container 2cf627686e2a1a3563d38c95c024c3ae3198d2e2a1456636aeef6f515992b018. Jan 17 12:08:49.399816 containerd[1735]: time="2025-01-17T12:08:49.399755322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-qn5wj,Uid:ce3e0252-15d5-43de-ba60-d0523e069f90,Namespace:kube-system,Attempt:1,} returns sandbox id \"2cf627686e2a1a3563d38c95c024c3ae3198d2e2a1456636aeef6f515992b018\"" Jan 17 12:08:49.404703 containerd[1735]: time="2025-01-17T12:08:49.404558569Z" level=info msg="CreateContainer within sandbox \"2cf627686e2a1a3563d38c95c024c3ae3198d2e2a1456636aeef6f515992b018\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:08:49.446714 containerd[1735]: time="2025-01-17T12:08:49.446511836Z" level=info msg="CreateContainer within sandbox \"2cf627686e2a1a3563d38c95c024c3ae3198d2e2a1456636aeef6f515992b018\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9444a4c60643914027a9dff5464c3d57c80a7c25469a4ad860057b1cf6458201\"" Jan 17 12:08:49.448876 containerd[1735]: time="2025-01-17T12:08:49.448315438Z" level=info msg="StartContainer for \"9444a4c60643914027a9dff5464c3d57c80a7c25469a4ad860057b1cf6458201\"" Jan 17 12:08:49.480233 systemd[1]: Started cri-containerd-9444a4c60643914027a9dff5464c3d57c80a7c25469a4ad860057b1cf6458201.scope - libcontainer container 9444a4c60643914027a9dff5464c3d57c80a7c25469a4ad860057b1cf6458201. Jan 17 12:08:49.531055 containerd[1735]: time="2025-01-17T12:08:49.530993209Z" level=info msg="StartContainer for \"9444a4c60643914027a9dff5464c3d57c80a7c25469a4ad860057b1cf6458201\" returns successfully" Jan 17 12:08:49.776303 systemd-networkd[1442]: cali60f49803e14: Gained IPv6LL Jan 17 12:08:50.180953 systemd[1]: run-containerd-runc-k8s.io-8c0d519ed5190cf4caf4359f60ad1401ef73a01d7689d988b3f34282036a0012-runc.uHgfi9.mount: Deactivated successfully. 
Jan 17 12:08:50.198767 kubelet[3279]: I0117 12:08:50.198471 3279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-qn5wj" podStartSLOduration=36.198450983 podStartE2EDuration="36.198450983s" podCreationTimestamp="2025-01-17 12:08:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:08:50.165533091 +0000 UTC m=+52.413463444" watchObservedRunningTime="2025-01-17 12:08:50.198450983 +0000 UTC m=+52.446381296" Jan 17 12:08:50.308463 kubelet[3279]: I0117 12:08:50.308391 3279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7f67964b8b-rrbh6" podStartSLOduration=26.009252106 podStartE2EDuration="29.308371116s" podCreationTimestamp="2025-01-17 12:08:21 +0000 UTC" firstStartedPulling="2025-01-17 12:08:45.773760396 +0000 UTC m=+48.021690749" lastFinishedPulling="2025-01-17 12:08:49.072879406 +0000 UTC m=+51.320809759" observedRunningTime="2025-01-17 12:08:50.238371046 +0000 UTC m=+52.486301399" watchObservedRunningTime="2025-01-17 12:08:50.308371116 +0000 UTC m=+52.556301429" Jan 17 12:08:50.672285 systemd-networkd[1442]: cali67031210afb: Gained IPv6LL Jan 17 12:08:51.415741 containerd[1735]: time="2025-01-17T12:08:51.415680425Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:08:51.417741 containerd[1735]: time="2025-01-17T12:08:51.417694188Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=39298409" Jan 17 12:08:51.421351 containerd[1735]: time="2025-01-17T12:08:51.421299714Z" level=info msg="ImageCreate event name:\"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:08:51.427922 containerd[1735]: time="2025-01-17T12:08:51.427838764Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:08:51.428795 containerd[1735]: time="2025-01-17T12:08:51.428661565Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 2.355295679s" Jan 17 12:08:51.428795 containerd[1735]: time="2025-01-17T12:08:51.428700045Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 17 12:08:51.431905 containerd[1735]: time="2025-01-17T12:08:51.431427730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:08:51.433032 containerd[1735]: time="2025-01-17T12:08:51.432990732Z" level=info msg="CreateContainer within sandbox \"1b0bb6dd2d3860acad487e3bb3df6e4fb4cf03c34fc4bd9a2ba7261e0e533aea\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:08:51.476828 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2295449292.mount: Deactivated successfully. 
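The kubelet latency lines above are internally consistent and worth decoding: podStartE2EDuration lines up with watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the window spent pulling images, which is why the coredns pods, whose pull timestamps are the zero time, report identical SLO and E2E values. A check of the calico-kube-controllers numbers, reproducing the logged figures from the logged timestamps (this is just the arithmetic, not kubelet's code):

    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.000000000 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-01-17 12:08:21.000000000 +0000 UTC")       // podCreationTimestamp
        firstPull := mustParse("2025-01-17 12:08:45.773760396 +0000 UTC")     // firstStartedPulling
        lastPull := mustParse("2025-01-17 12:08:49.072879406 +0000 UTC")      // lastFinishedPulling
        observed := mustParse("2025-01-17 12:08:50.308371116 +0000 UTC")      // watchObservedRunningTime

        e2e := observed.Sub(created)
        slo := e2e - lastPull.Sub(firstPull) // exclude image-pull time
        fmt.Println(e2e) // 29.308371116s == podStartE2EDuration
        fmt.Println(slo) // 26.009252106s == podStartSLOduration
    }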
Jan 17 12:08:51.490063 containerd[1735]: time="2025-01-17T12:08:51.489995142Z" level=info msg="CreateContainer within sandbox \"1b0bb6dd2d3860acad487e3bb3df6e4fb4cf03c34fc4bd9a2ba7261e0e533aea\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3bfd6ccda45c11e685484e4ae4469731d9bcafbe0af9f5de749e7d2677cd164c\"" Jan 17 12:08:51.492673 containerd[1735]: time="2025-01-17T12:08:51.492379586Z" level=info msg="StartContainer for \"3bfd6ccda45c11e685484e4ae4469731d9bcafbe0af9f5de749e7d2677cd164c\"" Jan 17 12:08:51.548690 systemd[1]: Started cri-containerd-3bfd6ccda45c11e685484e4ae4469731d9bcafbe0af9f5de749e7d2677cd164c.scope - libcontainer container 3bfd6ccda45c11e685484e4ae4469731d9bcafbe0af9f5de749e7d2677cd164c. Jan 17 12:08:51.597462 containerd[1735]: time="2025-01-17T12:08:51.597399152Z" level=info msg="StartContainer for \"3bfd6ccda45c11e685484e4ae4469731d9bcafbe0af9f5de749e7d2677cd164c\" returns successfully" Jan 17 12:08:51.763026 containerd[1735]: time="2025-01-17T12:08:51.761816451Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:08:51.766200 containerd[1735]: time="2025-01-17T12:08:51.766164738Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 17 12:08:51.767814 containerd[1735]: time="2025-01-17T12:08:51.767784061Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"40668079\" in 336.316691ms" Jan 17 12:08:51.767933 containerd[1735]: time="2025-01-17T12:08:51.767916621Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Jan 17 12:08:51.769593 containerd[1735]: time="2025-01-17T12:08:51.769432583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 17 12:08:51.777037 containerd[1735]: time="2025-01-17T12:08:51.776822395Z" level=info msg="CreateContainer within sandbox \"c150f52d1a38f79525dc5ae6e96c3e75777ead4b11b23995caebe5ead1dabaa4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:08:51.821647 containerd[1735]: time="2025-01-17T12:08:51.821514226Z" level=info msg="CreateContainer within sandbox \"c150f52d1a38f79525dc5ae6e96c3e75777ead4b11b23995caebe5ead1dabaa4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"2e5242e2c45cc7c519ba095f52a01364be7e1449abf123a4e99eaa02911e05dc\"" Jan 17 12:08:51.824601 containerd[1735]: time="2025-01-17T12:08:51.823097308Z" level=info msg="StartContainer for \"2e5242e2c45cc7c519ba095f52a01364be7e1449abf123a4e99eaa02911e05dc\"" Jan 17 12:08:51.866979 systemd[1]: Started cri-containerd-2e5242e2c45cc7c519ba095f52a01364be7e1449abf123a4e99eaa02911e05dc.scope - libcontainer container 2e5242e2c45cc7c519ba095f52a01364be7e1449abf123a4e99eaa02911e05dc. 
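The second apiserver pull above completes in 336ms against 2.36s for the first, and the follow-up event is an ImageUpdate with only 77 bytes read: because layers are content-addressed, a repeat pull only re-resolves the manifest and finds every blob already present locally. A toy content store shows the shape of that dedupe (stand-in names; containerd's actual store lives on disk):

    package main

    import "fmt"

    // contentStore is a toy stand-in for containerd's content-addressed store.
    type contentStore struct {
        blobs map[string][]byte // keyed by digest
    }

    // pull fetches a blob only if its digest is not already present, which is
    // why the second pull of the same image reads almost nothing.
    func (c *contentStore) pull(digest string, fetch func() []byte) (hit bool) {
        if _, ok := c.blobs[digest]; ok {
            return true
        }
        c.blobs[digest] = fetch()
        return false
    }

    func main() {
        cs := &contentStore{blobs: map[string][]byte{}}
        d := "sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486"
        fmt.Println(cs.pull(d, func() []byte { return []byte("layer data") })) // false: full fetch
        fmt.Println(cs.pull(d, func() []byte { return []byte("layer data") })) // true: cache hit
    }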
Jan 17 12:08:51.938524 containerd[1735]: time="2025-01-17T12:08:51.938444010Z" level=info msg="StartContainer for \"2e5242e2c45cc7c519ba095f52a01364be7e1449abf123a4e99eaa02911e05dc\" returns successfully" Jan 17 12:08:52.191805 kubelet[3279]: I0117 12:08:52.191122 3279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-85c7c4654d-9j4zk" podStartSLOduration=26.817278583 podStartE2EDuration="31.191092649s" podCreationTimestamp="2025-01-17 12:08:21 +0000 UTC" firstStartedPulling="2025-01-17 12:08:47.394920036 +0000 UTC m=+49.642850349" lastFinishedPulling="2025-01-17 12:08:51.768734062 +0000 UTC m=+54.016664415" observedRunningTime="2025-01-17 12:08:52.190526808 +0000 UTC m=+54.438457161" watchObservedRunningTime="2025-01-17 12:08:52.191092649 +0000 UTC m=+54.439023002" Jan 17 12:08:53.175467 kubelet[3279]: I0117 12:08:53.175347 3279 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:08:53.464793 kubelet[3279]: I0117 12:08:53.464612 3279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-85c7c4654d-dvhbm" podStartSLOduration=27.545366374 podStartE2EDuration="32.464589861s" podCreationTimestamp="2025-01-17 12:08:21 +0000 UTC" firstStartedPulling="2025-01-17 12:08:46.51050892 +0000 UTC m=+48.758439273" lastFinishedPulling="2025-01-17 12:08:51.429732407 +0000 UTC m=+53.677662760" observedRunningTime="2025-01-17 12:08:52.214867007 +0000 UTC m=+54.462797360" watchObservedRunningTime="2025-01-17 12:08:53.464589861 +0000 UTC m=+55.712520214" Jan 17 12:08:54.724777 containerd[1735]: time="2025-01-17T12:08:54.724721203Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:08:54.728127 containerd[1735]: time="2025-01-17T12:08:54.728047528Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7464730" Jan 17 12:08:54.732466 containerd[1735]: time="2025-01-17T12:08:54.732407975Z" level=info msg="ImageCreate event name:\"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:08:54.737924 containerd[1735]: time="2025-01-17T12:08:54.737855744Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:08:54.740169 containerd[1735]: time="2025-01-17T12:08:54.739859868Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"8834384\" in 2.970399685s" Jan 17 12:08:54.740169 containerd[1735]: time="2025-01-17T12:08:54.739900188Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Jan 17 12:08:54.744475 containerd[1735]: time="2025-01-17T12:08:54.744339555Z" level=info msg="CreateContainer within sandbox \"a600cd4fcb9b9c842aca8fae005f057b4a5eab3bb770f4cd1e8a560863545387\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 17 12:08:54.781659 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount114699592.mount: Deactivated successfully. Jan 17 12:08:54.792520 containerd[1735]: time="2025-01-17T12:08:54.792475876Z" level=info msg="CreateContainer within sandbox \"a600cd4fcb9b9c842aca8fae005f057b4a5eab3bb770f4cd1e8a560863545387\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6920e9af3ad6cd3c20db13521bcaf48461b6fc1d601cd86f2a44a945670d4f00\"" Jan 17 12:08:54.793469 containerd[1735]: time="2025-01-17T12:08:54.793436837Z" level=info msg="StartContainer for \"6920e9af3ad6cd3c20db13521bcaf48461b6fc1d601cd86f2a44a945670d4f00\"" Jan 17 12:08:54.832372 systemd[1]: run-containerd-runc-k8s.io-6920e9af3ad6cd3c20db13521bcaf48461b6fc1d601cd86f2a44a945670d4f00-runc.uubWvf.mount: Deactivated successfully. Jan 17 12:08:54.839358 systemd[1]: Started cri-containerd-6920e9af3ad6cd3c20db13521bcaf48461b6fc1d601cd86f2a44a945670d4f00.scope - libcontainer container 6920e9af3ad6cd3c20db13521bcaf48461b6fc1d601cd86f2a44a945670d4f00. Jan 17 12:08:54.870177 containerd[1735]: time="2025-01-17T12:08:54.870048885Z" level=info msg="StartContainer for \"6920e9af3ad6cd3c20db13521bcaf48461b6fc1d601cd86f2a44a945670d4f00\" returns successfully" Jan 17 12:08:54.872089 containerd[1735]: time="2025-01-17T12:08:54.871842288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 17 12:08:56.531143 containerd[1735]: time="2025-01-17T12:08:56.530884735Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:08:56.532938 containerd[1735]: time="2025-01-17T12:08:56.532904859Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=9883368" Jan 17 12:08:56.539815 containerd[1735]: time="2025-01-17T12:08:56.539748670Z" level=info msg="ImageCreate event name:\"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:08:56.545923 containerd[1735]: time="2025-01-17T12:08:56.545834240Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:08:56.547090 containerd[1735]: time="2025-01-17T12:08:56.546701082Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11252974\" in 1.674819474s" Jan 17 12:08:56.547090 containerd[1735]: time="2025-01-17T12:08:56.546742042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Jan 17 12:08:56.550784 containerd[1735]: time="2025-01-17T12:08:56.550720688Z" level=info msg="CreateContainer within sandbox \"a600cd4fcb9b9c842aca8fae005f057b4a5eab3bb770f4cd1e8a560863545387\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 17 12:08:56.588902 containerd[1735]: time="2025-01-17T12:08:56.588813552Z" level=info msg="CreateContainer within sandbox 
\"a600cd4fcb9b9c842aca8fae005f057b4a5eab3bb770f4cd1e8a560863545387\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"231b1eabed67cfbdb08bcefdfa2d815a8471b71a883074db3e7b119b69aa138d\"" Jan 17 12:08:56.590927 containerd[1735]: time="2025-01-17T12:08:56.589628073Z" level=info msg="StartContainer for \"231b1eabed67cfbdb08bcefdfa2d815a8471b71a883074db3e7b119b69aa138d\"" Jan 17 12:08:56.626287 systemd[1]: Started cri-containerd-231b1eabed67cfbdb08bcefdfa2d815a8471b71a883074db3e7b119b69aa138d.scope - libcontainer container 231b1eabed67cfbdb08bcefdfa2d815a8471b71a883074db3e7b119b69aa138d. Jan 17 12:08:56.660626 containerd[1735]: time="2025-01-17T12:08:56.660568551Z" level=info msg="StartContainer for \"231b1eabed67cfbdb08bcefdfa2d815a8471b71a883074db3e7b119b69aa138d\" returns successfully" Jan 17 12:08:57.097954 kubelet[3279]: I0117 12:08:57.097880 3279 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 17 12:08:57.101387 kubelet[3279]: I0117 12:08:57.101356 3279 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 17 12:08:58.044236 containerd[1735]: time="2025-01-17T12:08:58.044172539Z" level=info msg="StopPodSandbox for \"9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295\"" Jan 17 12:08:58.134755 containerd[1735]: 2025-01-17 12:08:58.100 [WARNING][5549] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c8756aff3b-k8s-calico--kube--controllers--7f67964b8b--rrbh6-eth0", GenerateName:"calico-kube-controllers-7f67964b8b-", Namespace:"calico-system", SelfLink:"", UID:"f694ce58-2a80-406a-a332-7d2c145777d9", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 8, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f67964b8b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c8756aff3b", ContainerID:"9d45ee43997fbd28c3ccc073fb10e523f4bbb5c2fb4c802132f04bd02fc2cd78", Pod:"calico-kube-controllers-7f67964b8b-rrbh6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.42.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali96cff1ebf16", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:08:58.134755 containerd[1735]: 2025-01-17 12:08:58.100 [INFO][5549] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295" Jan 17 12:08:58.134755 containerd[1735]: 
2025-01-17 12:08:58.100 [INFO][5549] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295" iface="eth0" netns="" Jan 17 12:08:58.134755 containerd[1735]: 2025-01-17 12:08:58.100 [INFO][5549] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295" Jan 17 12:08:58.134755 containerd[1735]: 2025-01-17 12:08:58.100 [INFO][5549] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295" Jan 17 12:08:58.134755 containerd[1735]: 2025-01-17 12:08:58.120 [INFO][5555] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295" HandleID="k8s-pod-network.9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295" Workload="ci--4081.3.0--a--c8756aff3b-k8s-calico--kube--controllers--7f67964b8b--rrbh6-eth0" Jan 17 12:08:58.134755 containerd[1735]: 2025-01-17 12:08:58.120 [INFO][5555] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:08:58.134755 containerd[1735]: 2025-01-17 12:08:58.120 [INFO][5555] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:08:58.134755 containerd[1735]: 2025-01-17 12:08:58.129 [WARNING][5555] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295" HandleID="k8s-pod-network.9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295" Workload="ci--4081.3.0--a--c8756aff3b-k8s-calico--kube--controllers--7f67964b8b--rrbh6-eth0" Jan 17 12:08:58.134755 containerd[1735]: 2025-01-17 12:08:58.130 [INFO][5555] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295" HandleID="k8s-pod-network.9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295" Workload="ci--4081.3.0--a--c8756aff3b-k8s-calico--kube--controllers--7f67964b8b--rrbh6-eth0" Jan 17 12:08:58.134755 containerd[1735]: 2025-01-17 12:08:58.131 [INFO][5555] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:08:58.134755 containerd[1735]: 2025-01-17 12:08:58.133 [INFO][5549] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295" Jan 17 12:08:58.135475 containerd[1735]: time="2025-01-17T12:08:58.134808810Z" level=info msg="TearDown network for sandbox \"9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295\" successfully" Jan 17 12:08:58.135475 containerd[1735]: time="2025-01-17T12:08:58.134836490Z" level=info msg="StopPodSandbox for \"9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295\" returns successfully" Jan 17 12:08:58.136067 containerd[1735]: time="2025-01-17T12:08:58.135862892Z" level=info msg="RemovePodSandbox for \"9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295\"" Jan 17 12:08:58.136067 containerd[1735]: time="2025-01-17T12:08:58.135899972Z" level=info msg="Forcibly stopping sandbox \"9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295\"" Jan 17 12:08:58.203290 containerd[1735]: 2025-01-17 12:08:58.170 [WARNING][5574] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c8756aff3b-k8s-calico--kube--controllers--7f67964b8b--rrbh6-eth0", GenerateName:"calico-kube-controllers-7f67964b8b-", Namespace:"calico-system", SelfLink:"", UID:"f694ce58-2a80-406a-a332-7d2c145777d9", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 8, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f67964b8b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c8756aff3b", ContainerID:"9d45ee43997fbd28c3ccc073fb10e523f4bbb5c2fb4c802132f04bd02fc2cd78", Pod:"calico-kube-controllers-7f67964b8b-rrbh6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.42.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali96cff1ebf16", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:08:58.203290 containerd[1735]: 2025-01-17 12:08:58.171 [INFO][5574] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295" Jan 17 12:08:58.203290 containerd[1735]: 2025-01-17 12:08:58.171 [INFO][5574] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295" iface="eth0" netns="" Jan 17 12:08:58.203290 containerd[1735]: 2025-01-17 12:08:58.171 [INFO][5574] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295" Jan 17 12:08:58.203290 containerd[1735]: 2025-01-17 12:08:58.171 [INFO][5574] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295" Jan 17 12:08:58.203290 containerd[1735]: 2025-01-17 12:08:58.190 [INFO][5581] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295" HandleID="k8s-pod-network.9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295" Workload="ci--4081.3.0--a--c8756aff3b-k8s-calico--kube--controllers--7f67964b8b--rrbh6-eth0" Jan 17 12:08:58.203290 containerd[1735]: 2025-01-17 12:08:58.190 [INFO][5581] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:08:58.203290 containerd[1735]: 2025-01-17 12:08:58.190 [INFO][5581] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:08:58.203290 containerd[1735]: 2025-01-17 12:08:58.198 [WARNING][5581] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295" HandleID="k8s-pod-network.9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295" Workload="ci--4081.3.0--a--c8756aff3b-k8s-calico--kube--controllers--7f67964b8b--rrbh6-eth0" Jan 17 12:08:58.203290 containerd[1735]: 2025-01-17 12:08:58.199 [INFO][5581] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295" HandleID="k8s-pod-network.9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295" Workload="ci--4081.3.0--a--c8756aff3b-k8s-calico--kube--controllers--7f67964b8b--rrbh6-eth0" Jan 17 12:08:58.203290 containerd[1735]: 2025-01-17 12:08:58.200 [INFO][5581] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:08:58.203290 containerd[1735]: 2025-01-17 12:08:58.202 [INFO][5574] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295" Jan 17 12:08:58.204422 containerd[1735]: time="2025-01-17T12:08:58.203389445Z" level=info msg="TearDown network for sandbox \"9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295\" successfully" Jan 17 12:08:58.210129 containerd[1735]: time="2025-01-17T12:08:58.210061576Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:08:58.210286 containerd[1735]: time="2025-01-17T12:08:58.210158136Z" level=info msg="RemovePodSandbox \"9e72851e38ac5dcf12bc30e313a10b59b29f22b81180f409e8fffd3a4a500295\" returns successfully" Jan 17 12:08:58.211067 containerd[1735]: time="2025-01-17T12:08:58.210829177Z" level=info msg="StopPodSandbox for \"7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495\"" Jan 17 12:08:58.297465 containerd[1735]: 2025-01-17 12:08:58.263 [WARNING][5599] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--qn5wj-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ce3e0252-15d5-43de-ba60-d0523e069f90", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 8, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c8756aff3b", ContainerID:"2cf627686e2a1a3563d38c95c024c3ae3198d2e2a1456636aeef6f515992b018", Pod:"coredns-7db6d8ff4d-qn5wj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali67031210afb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:08:58.297465 containerd[1735]: 2025-01-17 12:08:58.263 [INFO][5599] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495" Jan 17 12:08:58.297465 containerd[1735]: 2025-01-17 12:08:58.263 [INFO][5599] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495" iface="eth0" netns="" Jan 17 12:08:58.297465 containerd[1735]: 2025-01-17 12:08:58.263 [INFO][5599] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495" Jan 17 12:08:58.297465 containerd[1735]: 2025-01-17 12:08:58.263 [INFO][5599] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495" Jan 17 12:08:58.297465 containerd[1735]: 2025-01-17 12:08:58.284 [INFO][5608] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495" HandleID="k8s-pod-network.7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495" Workload="ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--qn5wj-eth0" Jan 17 12:08:58.297465 containerd[1735]: 2025-01-17 12:08:58.284 [INFO][5608] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:08:58.297465 containerd[1735]: 2025-01-17 12:08:58.284 [INFO][5608] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:08:58.297465 containerd[1735]: 2025-01-17 12:08:58.292 [WARNING][5608] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495" HandleID="k8s-pod-network.7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495" Workload="ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--qn5wj-eth0" Jan 17 12:08:58.297465 containerd[1735]: 2025-01-17 12:08:58.292 [INFO][5608] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495" HandleID="k8s-pod-network.7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495" Workload="ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--qn5wj-eth0" Jan 17 12:08:58.297465 containerd[1735]: 2025-01-17 12:08:58.294 [INFO][5608] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:08:58.297465 containerd[1735]: 2025-01-17 12:08:58.295 [INFO][5599] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495" Jan 17 12:08:58.298265 containerd[1735]: time="2025-01-17T12:08:58.297916123Z" level=info msg="TearDown network for sandbox \"7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495\" successfully" Jan 17 12:08:58.298265 containerd[1735]: time="2025-01-17T12:08:58.297947203Z" level=info msg="StopPodSandbox for \"7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495\" returns successfully" Jan 17 12:08:58.298443 containerd[1735]: time="2025-01-17T12:08:58.298409203Z" level=info msg="RemovePodSandbox for \"7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495\"" Jan 17 12:08:58.298490 containerd[1735]: time="2025-01-17T12:08:58.298448083Z" level=info msg="Forcibly stopping sandbox \"7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495\"" Jan 17 12:08:58.369372 containerd[1735]: 2025-01-17 12:08:58.334 [WARNING][5626] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--qn5wj-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ce3e0252-15d5-43de-ba60-d0523e069f90", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 8, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c8756aff3b", ContainerID:"2cf627686e2a1a3563d38c95c024c3ae3198d2e2a1456636aeef6f515992b018", Pod:"coredns-7db6d8ff4d-qn5wj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali67031210afb", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:08:58.369372 containerd[1735]: 2025-01-17 12:08:58.335 [INFO][5626] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495" Jan 17 12:08:58.369372 containerd[1735]: 2025-01-17 12:08:58.335 [INFO][5626] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495" iface="eth0" netns="" Jan 17 12:08:58.369372 containerd[1735]: 2025-01-17 12:08:58.335 [INFO][5626] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495" Jan 17 12:08:58.369372 containerd[1735]: 2025-01-17 12:08:58.335 [INFO][5626] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495" Jan 17 12:08:58.369372 containerd[1735]: 2025-01-17 12:08:58.355 [INFO][5632] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495" HandleID="k8s-pod-network.7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495" Workload="ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--qn5wj-eth0" Jan 17 12:08:58.369372 containerd[1735]: 2025-01-17 12:08:58.355 [INFO][5632] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:08:58.369372 containerd[1735]: 2025-01-17 12:08:58.355 [INFO][5632] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:08:58.369372 containerd[1735]: 2025-01-17 12:08:58.364 [WARNING][5632] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495" HandleID="k8s-pod-network.7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495" Workload="ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--qn5wj-eth0" Jan 17 12:08:58.369372 containerd[1735]: 2025-01-17 12:08:58.364 [INFO][5632] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495" HandleID="k8s-pod-network.7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495" Workload="ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--qn5wj-eth0" Jan 17 12:08:58.369372 containerd[1735]: 2025-01-17 12:08:58.365 [INFO][5632] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:08:58.369372 containerd[1735]: 2025-01-17 12:08:58.367 [INFO][5626] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495" Jan 17 12:08:58.369372 containerd[1735]: time="2025-01-17T12:08:58.369357402Z" level=info msg="TearDown network for sandbox \"7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495\" successfully" Jan 17 12:08:58.396146 containerd[1735]: time="2025-01-17T12:08:58.396082006Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:08:58.396380 containerd[1735]: time="2025-01-17T12:08:58.396181246Z" level=info msg="RemovePodSandbox \"7694895dab580fda6f46e9f59cafaeab64e8303c47e96b2f11c2ffb1bfd29495\" returns successfully" Jan 17 12:08:58.397059 containerd[1735]: time="2025-01-17T12:08:58.396764727Z" level=info msg="StopPodSandbox for \"9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c\"" Jan 17 12:08:58.461552 containerd[1735]: 2025-01-17 12:08:58.431 [WARNING][5650] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--dvhbm-eth0", GenerateName:"calico-apiserver-85c7c4654d-", Namespace:"calico-apiserver", SelfLink:"", UID:"9ce371f8-6664-4b2d-8bfc-dc0423d17dd2", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 8, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85c7c4654d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c8756aff3b", ContainerID:"1b0bb6dd2d3860acad487e3bb3df6e4fb4cf03c34fc4bd9a2ba7261e0e533aea", Pod:"calico-apiserver-85c7c4654d-dvhbm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9b9de31bc06", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:08:58.461552 containerd[1735]: 2025-01-17 12:08:58.432 [INFO][5650] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c" Jan 17 12:08:58.461552 containerd[1735]: 2025-01-17 12:08:58.432 [INFO][5650] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c" iface="eth0" netns="" Jan 17 12:08:58.461552 containerd[1735]: 2025-01-17 12:08:58.432 [INFO][5650] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c" Jan 17 12:08:58.461552 containerd[1735]: 2025-01-17 12:08:58.432 [INFO][5650] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c" Jan 17 12:08:58.461552 containerd[1735]: 2025-01-17 12:08:58.449 [INFO][5656] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c" HandleID="k8s-pod-network.9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c" Workload="ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--dvhbm-eth0" Jan 17 12:08:58.461552 containerd[1735]: 2025-01-17 12:08:58.450 [INFO][5656] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:08:58.461552 containerd[1735]: 2025-01-17 12:08:58.450 [INFO][5656] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:08:58.461552 containerd[1735]: 2025-01-17 12:08:58.457 [WARNING][5656] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c" HandleID="k8s-pod-network.9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c" Workload="ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--dvhbm-eth0" Jan 17 12:08:58.461552 containerd[1735]: 2025-01-17 12:08:58.457 [INFO][5656] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c" HandleID="k8s-pod-network.9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c" Workload="ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--dvhbm-eth0" Jan 17 12:08:58.461552 containerd[1735]: 2025-01-17 12:08:58.459 [INFO][5656] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:08:58.461552 containerd[1735]: 2025-01-17 12:08:58.460 [INFO][5650] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c" Jan 17 12:08:58.462344 containerd[1735]: time="2025-01-17T12:08:58.461593716Z" level=info msg="TearDown network for sandbox \"9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c\" successfully" Jan 17 12:08:58.462344 containerd[1735]: time="2025-01-17T12:08:58.461619676Z" level=info msg="StopPodSandbox for \"9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c\" returns successfully" Jan 17 12:08:58.462856 containerd[1735]: time="2025-01-17T12:08:58.462501957Z" level=info msg="RemovePodSandbox for \"9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c\"" Jan 17 12:08:58.462856 containerd[1735]: time="2025-01-17T12:08:58.462544477Z" level=info msg="Forcibly stopping sandbox \"9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c\"" Jan 17 12:08:58.532702 containerd[1735]: 2025-01-17 12:08:58.496 [WARNING][5674] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--dvhbm-eth0", GenerateName:"calico-apiserver-85c7c4654d-", Namespace:"calico-apiserver", SelfLink:"", UID:"9ce371f8-6664-4b2d-8bfc-dc0423d17dd2", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 8, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85c7c4654d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c8756aff3b", ContainerID:"1b0bb6dd2d3860acad487e3bb3df6e4fb4cf03c34fc4bd9a2ba7261e0e533aea", Pod:"calico-apiserver-85c7c4654d-dvhbm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9b9de31bc06", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:08:58.532702 containerd[1735]: 2025-01-17 12:08:58.496 [INFO][5674] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c" Jan 17 12:08:58.532702 containerd[1735]: 2025-01-17 12:08:58.496 [INFO][5674] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c" iface="eth0" netns="" Jan 17 12:08:58.532702 containerd[1735]: 2025-01-17 12:08:58.496 [INFO][5674] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c" Jan 17 12:08:58.532702 containerd[1735]: 2025-01-17 12:08:58.496 [INFO][5674] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c" Jan 17 12:08:58.532702 containerd[1735]: 2025-01-17 12:08:58.520 [INFO][5680] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c" HandleID="k8s-pod-network.9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c" Workload="ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--dvhbm-eth0" Jan 17 12:08:58.532702 containerd[1735]: 2025-01-17 12:08:58.520 [INFO][5680] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:08:58.532702 containerd[1735]: 2025-01-17 12:08:58.520 [INFO][5680] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:08:58.532702 containerd[1735]: 2025-01-17 12:08:58.528 [WARNING][5680] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c" HandleID="k8s-pod-network.9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c" Workload="ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--dvhbm-eth0" Jan 17 12:08:58.532702 containerd[1735]: 2025-01-17 12:08:58.528 [INFO][5680] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c" HandleID="k8s-pod-network.9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c" Workload="ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--dvhbm-eth0" Jan 17 12:08:58.532702 containerd[1735]: 2025-01-17 12:08:58.530 [INFO][5680] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:08:58.532702 containerd[1735]: 2025-01-17 12:08:58.531 [INFO][5674] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c" Jan 17 12:08:58.533230 containerd[1735]: time="2025-01-17T12:08:58.532748674Z" level=info msg="TearDown network for sandbox \"9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c\" successfully" Jan 17 12:08:58.540987 containerd[1735]: time="2025-01-17T12:08:58.540924928Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:08:58.541190 containerd[1735]: time="2025-01-17T12:08:58.541006368Z" level=info msg="RemovePodSandbox \"9a4ca5df0991b2e896ca36a016f61383ac367d50d8a66454c870f33f28996c4c\" returns successfully" Jan 17 12:08:58.541716 containerd[1735]: time="2025-01-17T12:08:58.541451769Z" level=info msg="StopPodSandbox for \"16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7\"" Jan 17 12:08:58.616968 containerd[1735]: 2025-01-17 12:08:58.582 [WARNING][5698] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--9j4zk-eth0", GenerateName:"calico-apiserver-85c7c4654d-", Namespace:"calico-apiserver", SelfLink:"", UID:"054e26d4-3078-4a7b-9cfd-e882b8b74093", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 8, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85c7c4654d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c8756aff3b", ContainerID:"c150f52d1a38f79525dc5ae6e96c3e75777ead4b11b23995caebe5ead1dabaa4", Pod:"calico-apiserver-85c7c4654d-9j4zk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali752171d3204", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:08:58.616968 containerd[1735]: 2025-01-17 12:08:58.582 [INFO][5698] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7" Jan 17 12:08:58.616968 containerd[1735]: 2025-01-17 12:08:58.582 [INFO][5698] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7" iface="eth0" netns="" Jan 17 12:08:58.616968 containerd[1735]: 2025-01-17 12:08:58.582 [INFO][5698] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7" Jan 17 12:08:58.616968 containerd[1735]: 2025-01-17 12:08:58.582 [INFO][5698] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7" Jan 17 12:08:58.616968 containerd[1735]: 2025-01-17 12:08:58.602 [INFO][5704] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7" HandleID="k8s-pod-network.16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7" Workload="ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--9j4zk-eth0" Jan 17 12:08:58.616968 containerd[1735]: 2025-01-17 12:08:58.602 [INFO][5704] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:08:58.616968 containerd[1735]: 2025-01-17 12:08:58.602 [INFO][5704] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:08:58.616968 containerd[1735]: 2025-01-17 12:08:58.610 [WARNING][5704] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7" HandleID="k8s-pod-network.16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7" Workload="ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--9j4zk-eth0" Jan 17 12:08:58.616968 containerd[1735]: 2025-01-17 12:08:58.610 [INFO][5704] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7" HandleID="k8s-pod-network.16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7" Workload="ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--9j4zk-eth0" Jan 17 12:08:58.616968 containerd[1735]: 2025-01-17 12:08:58.614 [INFO][5704] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:08:58.616968 containerd[1735]: 2025-01-17 12:08:58.615 [INFO][5698] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7" Jan 17 12:08:58.617672 containerd[1735]: time="2025-01-17T12:08:58.617539416Z" level=info msg="TearDown network for sandbox \"16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7\" successfully" Jan 17 12:08:58.617672 containerd[1735]: time="2025-01-17T12:08:58.617573816Z" level=info msg="StopPodSandbox for \"16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7\" returns successfully" Jan 17 12:08:58.618098 containerd[1735]: time="2025-01-17T12:08:58.618065256Z" level=info msg="RemovePodSandbox for \"16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7\"" Jan 17 12:08:58.618221 containerd[1735]: time="2025-01-17T12:08:58.618180617Z" level=info msg="Forcibly stopping sandbox \"16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7\"" Jan 17 12:08:58.689648 containerd[1735]: 2025-01-17 12:08:58.654 [WARNING][5722] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--9j4zk-eth0", GenerateName:"calico-apiserver-85c7c4654d-", Namespace:"calico-apiserver", SelfLink:"", UID:"054e26d4-3078-4a7b-9cfd-e882b8b74093", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 8, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85c7c4654d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c8756aff3b", ContainerID:"c150f52d1a38f79525dc5ae6e96c3e75777ead4b11b23995caebe5ead1dabaa4", Pod:"calico-apiserver-85c7c4654d-9j4zk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.42.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali752171d3204", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:08:58.689648 containerd[1735]: 2025-01-17 12:08:58.654 [INFO][5722] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7" Jan 17 12:08:58.689648 containerd[1735]: 2025-01-17 12:08:58.655 [INFO][5722] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7" iface="eth0" netns="" Jan 17 12:08:58.689648 containerd[1735]: 2025-01-17 12:08:58.655 [INFO][5722] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7" Jan 17 12:08:58.689648 containerd[1735]: 2025-01-17 12:08:58.655 [INFO][5722] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7" Jan 17 12:08:58.689648 containerd[1735]: 2025-01-17 12:08:58.675 [INFO][5728] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7" HandleID="k8s-pod-network.16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7" Workload="ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--9j4zk-eth0" Jan 17 12:08:58.689648 containerd[1735]: 2025-01-17 12:08:58.676 [INFO][5728] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:08:58.689648 containerd[1735]: 2025-01-17 12:08:58.676 [INFO][5728] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:08:58.689648 containerd[1735]: 2025-01-17 12:08:58.684 [WARNING][5728] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7" HandleID="k8s-pod-network.16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7" Workload="ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--9j4zk-eth0" Jan 17 12:08:58.689648 containerd[1735]: 2025-01-17 12:08:58.685 [INFO][5728] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7" HandleID="k8s-pod-network.16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7" Workload="ci--4081.3.0--a--c8756aff3b-k8s-calico--apiserver--85c7c4654d--9j4zk-eth0" Jan 17 12:08:58.689648 containerd[1735]: 2025-01-17 12:08:58.686 [INFO][5728] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:08:58.689648 containerd[1735]: 2025-01-17 12:08:58.688 [INFO][5722] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7" Jan 17 12:08:58.689648 containerd[1735]: time="2025-01-17T12:08:58.689634816Z" level=info msg="TearDown network for sandbox \"16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7\" successfully" Jan 17 12:08:58.697456 containerd[1735]: time="2025-01-17T12:08:58.697400709Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:08:58.697727 containerd[1735]: time="2025-01-17T12:08:58.697477269Z" level=info msg="RemovePodSandbox \"16881d28fc9a11ddd6fe31a3b900db5423baaac3d274348bd7b1ccd1aec11cf7\" returns successfully" Jan 17 12:08:58.697972 containerd[1735]: time="2025-01-17T12:08:58.697939270Z" level=info msg="StopPodSandbox for \"8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6\"" Jan 17 12:08:58.763978 containerd[1735]: 2025-01-17 12:08:58.731 [WARNING][5746] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--cskls-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a87b1eb5-ab2f-4531-8fae-234c482d801e", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 8, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c8756aff3b", ContainerID:"bd4c7bc8fb4fe41788a6058a9317413af5e8f353f0fe4619d1c55a93ef787aad", Pod:"coredns-7db6d8ff4d-cskls", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid5002783072", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:08:58.763978 containerd[1735]: 2025-01-17 12:08:58.731 [INFO][5746] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6" Jan 17 12:08:58.763978 containerd[1735]: 2025-01-17 12:08:58.731 [INFO][5746] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6" iface="eth0" netns="" Jan 17 12:08:58.763978 containerd[1735]: 2025-01-17 12:08:58.732 [INFO][5746] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6" Jan 17 12:08:58.763978 containerd[1735]: 2025-01-17 12:08:58.732 [INFO][5746] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6" Jan 17 12:08:58.763978 containerd[1735]: 2025-01-17 12:08:58.752 [INFO][5752] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6" HandleID="k8s-pod-network.8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6" Workload="ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--cskls-eth0" Jan 17 12:08:58.763978 containerd[1735]: 2025-01-17 12:08:58.752 [INFO][5752] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:08:58.763978 containerd[1735]: 2025-01-17 12:08:58.752 [INFO][5752] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:08:58.763978 containerd[1735]: 2025-01-17 12:08:58.760 [WARNING][5752] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6" HandleID="k8s-pod-network.8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6" Workload="ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--cskls-eth0" Jan 17 12:08:58.763978 containerd[1735]: 2025-01-17 12:08:58.760 [INFO][5752] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6" HandleID="k8s-pod-network.8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6" Workload="ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--cskls-eth0" Jan 17 12:08:58.763978 containerd[1735]: 2025-01-17 12:08:58.761 [INFO][5752] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:08:58.763978 containerd[1735]: 2025-01-17 12:08:58.762 [INFO][5746] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6" Jan 17 12:08:58.764581 containerd[1735]: time="2025-01-17T12:08:58.764009300Z" level=info msg="TearDown network for sandbox \"8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6\" successfully" Jan 17 12:08:58.764581 containerd[1735]: time="2025-01-17T12:08:58.764035260Z" level=info msg="StopPodSandbox for \"8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6\" returns successfully" Jan 17 12:08:58.765074 containerd[1735]: time="2025-01-17T12:08:58.764776901Z" level=info msg="RemovePodSandbox for \"8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6\"" Jan 17 12:08:58.765074 containerd[1735]: time="2025-01-17T12:08:58.764814381Z" level=info msg="Forcibly stopping sandbox \"8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6\"" Jan 17 12:08:58.836329 containerd[1735]: 2025-01-17 12:08:58.801 [WARNING][5770] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--cskls-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"a87b1eb5-ab2f-4531-8fae-234c482d801e", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 8, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c8756aff3b", ContainerID:"bd4c7bc8fb4fe41788a6058a9317413af5e8f353f0fe4619d1c55a93ef787aad", Pod:"coredns-7db6d8ff4d-cskls", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.42.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid5002783072", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:08:58.836329 containerd[1735]: 2025-01-17 12:08:58.801 [INFO][5770] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6" Jan 17 12:08:58.836329 containerd[1735]: 2025-01-17 12:08:58.801 [INFO][5770] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6" iface="eth0" netns="" Jan 17 12:08:58.836329 containerd[1735]: 2025-01-17 12:08:58.801 [INFO][5770] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6" Jan 17 12:08:58.836329 containerd[1735]: 2025-01-17 12:08:58.801 [INFO][5770] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6" Jan 17 12:08:58.836329 containerd[1735]: 2025-01-17 12:08:58.823 [INFO][5776] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6" HandleID="k8s-pod-network.8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6" Workload="ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--cskls-eth0" Jan 17 12:08:58.836329 containerd[1735]: 2025-01-17 12:08:58.823 [INFO][5776] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:08:58.836329 containerd[1735]: 2025-01-17 12:08:58.823 [INFO][5776] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:08:58.836329 containerd[1735]: 2025-01-17 12:08:58.831 [WARNING][5776] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6" HandleID="k8s-pod-network.8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6" Workload="ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--cskls-eth0" Jan 17 12:08:58.836329 containerd[1735]: 2025-01-17 12:08:58.831 [INFO][5776] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6" HandleID="k8s-pod-network.8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6" Workload="ci--4081.3.0--a--c8756aff3b-k8s-coredns--7db6d8ff4d--cskls-eth0" Jan 17 12:08:58.836329 containerd[1735]: 2025-01-17 12:08:58.833 [INFO][5776] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:08:58.836329 containerd[1735]: 2025-01-17 12:08:58.834 [INFO][5770] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6" Jan 17 12:08:58.837393 containerd[1735]: time="2025-01-17T12:08:58.836927222Z" level=info msg="TearDown network for sandbox \"8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6\" successfully" Jan 17 12:08:58.845237 containerd[1735]: time="2025-01-17T12:08:58.845151235Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:08:58.845391 containerd[1735]: time="2025-01-17T12:08:58.845261355Z" level=info msg="RemovePodSandbox \"8150f4b645bde6f027d591f776b53749d4e730a6285dffa942dbbc5db9c254f6\" returns successfully" Jan 17 12:08:58.846193 containerd[1735]: time="2025-01-17T12:08:58.845976837Z" level=info msg="StopPodSandbox for \"e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1\"" Jan 17 12:08:58.915360 containerd[1735]: 2025-01-17 12:08:58.883 [WARNING][5794] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c8756aff3b-k8s-csi--node--driver--zwfv8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c1a3420f-a34b-41c3-a151-733080e0373a", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 8, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c8756aff3b", ContainerID:"a600cd4fcb9b9c842aca8fae005f057b4a5eab3bb770f4cd1e8a560863545387", Pod:"csi-node-driver-zwfv8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.42.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali60f49803e14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:08:58.915360 containerd[1735]: 2025-01-17 12:08:58.883 [INFO][5794] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1" Jan 17 12:08:58.915360 containerd[1735]: 2025-01-17 12:08:58.883 [INFO][5794] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1" iface="eth0" netns="" Jan 17 12:08:58.915360 containerd[1735]: 2025-01-17 12:08:58.883 [INFO][5794] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1" Jan 17 12:08:58.915360 containerd[1735]: 2025-01-17 12:08:58.883 [INFO][5794] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1" Jan 17 12:08:58.915360 containerd[1735]: 2025-01-17 12:08:58.903 [INFO][5800] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1" HandleID="k8s-pod-network.e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1" Workload="ci--4081.3.0--a--c8756aff3b-k8s-csi--node--driver--zwfv8-eth0" Jan 17 12:08:58.915360 containerd[1735]: 2025-01-17 12:08:58.903 [INFO][5800] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:08:58.915360 containerd[1735]: 2025-01-17 12:08:58.903 [INFO][5800] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:08:58.915360 containerd[1735]: 2025-01-17 12:08:58.911 [WARNING][5800] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1" HandleID="k8s-pod-network.e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1" Workload="ci--4081.3.0--a--c8756aff3b-k8s-csi--node--driver--zwfv8-eth0" Jan 17 12:08:58.915360 containerd[1735]: 2025-01-17 12:08:58.911 [INFO][5800] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1" HandleID="k8s-pod-network.e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1" Workload="ci--4081.3.0--a--c8756aff3b-k8s-csi--node--driver--zwfv8-eth0" Jan 17 12:08:58.915360 containerd[1735]: 2025-01-17 12:08:58.912 [INFO][5800] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:08:58.915360 containerd[1735]: 2025-01-17 12:08:58.913 [INFO][5794] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1" Jan 17 12:08:58.915360 containerd[1735]: time="2025-01-17T12:08:58.915342072Z" level=info msg="TearDown network for sandbox \"e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1\" successfully" Jan 17 12:08:58.917229 containerd[1735]: time="2025-01-17T12:08:58.915366432Z" level=info msg="StopPodSandbox for \"e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1\" returns successfully" Jan 17 12:08:58.917229 containerd[1735]: time="2025-01-17T12:08:58.915815513Z" level=info msg="RemovePodSandbox for \"e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1\"" Jan 17 12:08:58.917229 containerd[1735]: time="2025-01-17T12:08:58.915843353Z" level=info msg="Forcibly stopping sandbox \"e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1\"" Jan 17 12:08:58.984875 containerd[1735]: 2025-01-17 12:08:58.952 [WARNING][5818] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--c8756aff3b-k8s-csi--node--driver--zwfv8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c1a3420f-a34b-41c3-a151-733080e0373a", ResourceVersion:"901", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 8, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-c8756aff3b", ContainerID:"a600cd4fcb9b9c842aca8fae005f057b4a5eab3bb770f4cd1e8a560863545387", Pod:"csi-node-driver-zwfv8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.42.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali60f49803e14", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:08:58.984875 containerd[1735]: 2025-01-17 12:08:58.952 [INFO][5818] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1" Jan 17 12:08:58.984875 containerd[1735]: 2025-01-17 12:08:58.952 [INFO][5818] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1" iface="eth0" netns="" Jan 17 12:08:58.984875 containerd[1735]: 2025-01-17 12:08:58.952 [INFO][5818] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1" Jan 17 12:08:58.984875 containerd[1735]: 2025-01-17 12:08:58.952 [INFO][5818] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1" Jan 17 12:08:58.984875 containerd[1735]: 2025-01-17 12:08:58.971 [INFO][5824] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1" HandleID="k8s-pod-network.e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1" Workload="ci--4081.3.0--a--c8756aff3b-k8s-csi--node--driver--zwfv8-eth0" Jan 17 12:08:58.984875 containerd[1735]: 2025-01-17 12:08:58.971 [INFO][5824] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:08:58.984875 containerd[1735]: 2025-01-17 12:08:58.971 [INFO][5824] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:08:58.984875 containerd[1735]: 2025-01-17 12:08:58.980 [WARNING][5824] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1" HandleID="k8s-pod-network.e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1" Workload="ci--4081.3.0--a--c8756aff3b-k8s-csi--node--driver--zwfv8-eth0" Jan 17 12:08:58.984875 containerd[1735]: 2025-01-17 12:08:58.980 [INFO][5824] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1" HandleID="k8s-pod-network.e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1" Workload="ci--4081.3.0--a--c8756aff3b-k8s-csi--node--driver--zwfv8-eth0" Jan 17 12:08:58.984875 containerd[1735]: 2025-01-17 12:08:58.982 [INFO][5824] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:08:58.984875 containerd[1735]: 2025-01-17 12:08:58.983 [INFO][5818] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1" Jan 17 12:08:58.985306 containerd[1735]: time="2025-01-17T12:08:58.984933868Z" level=info msg="TearDown network for sandbox \"e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1\" successfully" Jan 17 12:08:58.992813 containerd[1735]: time="2025-01-17T12:08:58.992755561Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:08:58.992923 containerd[1735]: time="2025-01-17T12:08:58.992870242Z" level=info msg="RemovePodSandbox \"e33dd1dc8e09dd5d708fb82d06f1dce606be9e24032cd0ae7bae2f877378f8a1\" returns successfully" Jan 17 12:09:04.331291 kubelet[3279]: I0117 12:09:04.331153 3279 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-zwfv8" podStartSLOduration=35.133801437 podStartE2EDuration="43.329224052s" podCreationTimestamp="2025-01-17 12:08:21 +0000 UTC" firstStartedPulling="2025-01-17 12:08:48.352984589 +0000 UTC m=+50.600914942" lastFinishedPulling="2025-01-17 12:08:56.548407204 +0000 UTC m=+58.796337557" observedRunningTime="2025-01-17 12:08:57.204801619 +0000 UTC m=+59.452731972" watchObservedRunningTime="2025-01-17 12:09:04.329224052 +0000 UTC m=+66.577154405" Jan 17 12:09:06.631433 kubelet[3279]: I0117 12:09:06.631393 3279 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:10:04.275186 systemd[1]: run-containerd-runc-k8s.io-4702aa73da4b268e56a4e6ad07b18f8d97560d8d90533e345bf1eee4629e86d1-runc.FHrO0d.mount: Deactivated successfully. Jan 17 12:10:34.273803 systemd[1]: run-containerd-runc-k8s.io-4702aa73da4b268e56a4e6ad07b18f8d97560d8d90533e345bf1eee4629e86d1-runc.7KCfdy.mount: Deactivated successfully. Jan 17 12:11:24.567030 systemd[1]: Started sshd@7-10.200.20.40:22-10.200.16.10:47888.service - OpenSSH per-connection server daemon (10.200.16.10:47888). Jan 17 12:11:25.005257 sshd[6154]: Accepted publickey for core from 10.200.16.10 port 47888 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4 Jan 17 12:11:25.007836 sshd[6154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:11:25.016980 systemd-logind[1690]: New session 10 of user core. Jan 17 12:11:25.021443 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 17 12:11:25.414156 sshd[6154]: pam_unix(sshd:session): session closed for user core
Jan 17 12:11:25.418042 systemd-logind[1690]: Session 10 logged out. Waiting for processes to exit.
Jan 17 12:11:25.418514 systemd[1]: sshd@7-10.200.20.40:22-10.200.16.10:47888.service: Deactivated successfully.
Jan 17 12:11:25.421069 systemd[1]: session-10.scope: Deactivated successfully.
Jan 17 12:11:25.423045 systemd-logind[1690]: Removed session 10.
Jan 17 12:11:30.500392 systemd[1]: Started sshd@8-10.200.20.40:22-10.200.16.10:40160.service - OpenSSH per-connection server daemon (10.200.16.10:40160).
Jan 17 12:11:30.949815 sshd[6177]: Accepted publickey for core from 10.200.16.10 port 40160 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4
Jan 17 12:11:30.951342 sshd[6177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:11:30.955744 systemd-logind[1690]: New session 11 of user core.
Jan 17 12:11:30.959301 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 17 12:11:31.364424 sshd[6177]: pam_unix(sshd:session): session closed for user core
Jan 17 12:11:31.367997 systemd[1]: sshd@8-10.200.20.40:22-10.200.16.10:40160.service: Deactivated successfully.
Jan 17 12:11:31.369857 systemd[1]: session-11.scope: Deactivated successfully.
Jan 17 12:11:31.370570 systemd-logind[1690]: Session 11 logged out. Waiting for processes to exit.
Jan 17 12:11:31.371802 systemd-logind[1690]: Removed session 11.
Jan 17 12:11:36.449501 systemd[1]: Started sshd@9-10.200.20.40:22-10.200.16.10:54796.service - OpenSSH per-connection server daemon (10.200.16.10:54796).
Jan 17 12:11:36.879336 sshd[6232]: Accepted publickey for core from 10.200.16.10 port 54796 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4
Jan 17 12:11:36.880761 sshd[6232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:11:36.884653 systemd-logind[1690]: New session 12 of user core.
Jan 17 12:11:36.890278 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 17 12:11:37.280936 sshd[6232]: pam_unix(sshd:session): session closed for user core
Jan 17 12:11:37.285790 systemd[1]: sshd@9-10.200.20.40:22-10.200.16.10:54796.service: Deactivated successfully.
Jan 17 12:11:37.288450 systemd[1]: session-12.scope: Deactivated successfully.
Jan 17 12:11:37.289092 systemd-logind[1690]: Session 12 logged out. Waiting for processes to exit.
Jan 17 12:11:37.290631 systemd-logind[1690]: Removed session 12.
Jan 17 12:11:37.369403 systemd[1]: Started sshd@10-10.200.20.40:22-10.200.16.10:54802.service - OpenSSH per-connection server daemon (10.200.16.10:54802).
Jan 17 12:11:37.798424 sshd[6246]: Accepted publickey for core from 10.200.16.10 port 54802 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4
Jan 17 12:11:37.800192 sshd[6246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:11:37.805446 systemd-logind[1690]: New session 13 of user core.
Jan 17 12:11:37.811268 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 17 12:11:38.268488 sshd[6246]: pam_unix(sshd:session): session closed for user core
Jan 17 12:11:38.272716 systemd[1]: sshd@10-10.200.20.40:22-10.200.16.10:54802.service: Deactivated successfully.
Jan 17 12:11:38.276036 systemd[1]: session-13.scope: Deactivated successfully.
Jan 17 12:11:38.278343 systemd-logind[1690]: Session 13 logged out. Waiting for processes to exit.
Jan 17 12:11:38.279648 systemd-logind[1690]: Removed session 13.
Jan 17 12:11:38.351481 systemd[1]: Started sshd@11-10.200.20.40:22-10.200.16.10:54810.service - OpenSSH per-connection server daemon (10.200.16.10:54810).
Jan 17 12:11:38.785433 sshd[6257]: Accepted publickey for core from 10.200.16.10 port 54810 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4
Jan 17 12:11:38.787136 sshd[6257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:11:38.791207 systemd-logind[1690]: New session 14 of user core.
Jan 17 12:11:38.802324 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 17 12:11:39.183477 sshd[6257]: pam_unix(sshd:session): session closed for user core
Jan 17 12:11:39.188057 systemd[1]: sshd@11-10.200.20.40:22-10.200.16.10:54810.service: Deactivated successfully.
Jan 17 12:11:39.190200 systemd[1]: session-14.scope: Deactivated successfully.
Jan 17 12:11:39.191851 systemd-logind[1690]: Session 14 logged out. Waiting for processes to exit.
Jan 17 12:11:39.193383 systemd-logind[1690]: Removed session 14.
Jan 17 12:11:44.271396 systemd[1]: Started sshd@12-10.200.20.40:22-10.200.16.10:54824.service - OpenSSH per-connection server daemon (10.200.16.10:54824).
Jan 17 12:11:44.702688 sshd[6274]: Accepted publickey for core from 10.200.16.10 port 54824 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4
Jan 17 12:11:44.704579 sshd[6274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:11:44.710522 systemd-logind[1690]: New session 15 of user core.
Jan 17 12:11:44.715291 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 17 12:11:45.101353 sshd[6274]: pam_unix(sshd:session): session closed for user core
Jan 17 12:11:45.104022 systemd[1]: sshd@12-10.200.20.40:22-10.200.16.10:54824.service: Deactivated successfully.
Jan 17 12:11:45.106950 systemd[1]: session-15.scope: Deactivated successfully.
Jan 17 12:11:45.108271 systemd-logind[1690]: Session 15 logged out. Waiting for processes to exit.
Jan 17 12:11:45.110401 systemd-logind[1690]: Removed session 15.
Jan 17 12:11:50.176443 systemd[1]: Started sshd@13-10.200.20.40:22-10.200.16.10:50748.service - OpenSSH per-connection server daemon (10.200.16.10:50748).
Jan 17 12:11:50.584118 sshd[6294]: Accepted publickey for core from 10.200.16.10 port 50748 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4
Jan 17 12:11:50.585906 sshd[6294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:11:50.590575 systemd-logind[1690]: New session 16 of user core.
Jan 17 12:11:50.595327 systemd[1]: Started session-16.scope - Session 16 of User core.
Jan 17 12:11:50.966603 sshd[6294]: pam_unix(sshd:session): session closed for user core
Jan 17 12:11:50.970925 systemd[1]: sshd@13-10.200.20.40:22-10.200.16.10:50748.service: Deactivated successfully.
Jan 17 12:11:50.972988 systemd[1]: session-16.scope: Deactivated successfully.
Jan 17 12:11:50.973801 systemd-logind[1690]: Session 16 logged out. Waiting for processes to exit.
Jan 17 12:11:50.975193 systemd-logind[1690]: Removed session 16.
Jan 17 12:11:56.050439 systemd[1]: Started sshd@14-10.200.20.40:22-10.200.16.10:45606.service - OpenSSH per-connection server daemon (10.200.16.10:45606).
Jan 17 12:11:56.483789 sshd[6325]: Accepted publickey for core from 10.200.16.10 port 45606 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4
Jan 17 12:11:56.485240 sshd[6325]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:11:56.489561 systemd-logind[1690]: New session 17 of user core.
Jan 17 12:11:56.494289 systemd[1]: Started session-17.scope - Session 17 of User core.
Jan 17 12:11:56.877738 sshd[6325]: pam_unix(sshd:session): session closed for user core
Jan 17 12:11:56.881374 systemd[1]: sshd@14-10.200.20.40:22-10.200.16.10:45606.service: Deactivated successfully.
Jan 17 12:11:56.883363 systemd[1]: session-17.scope: Deactivated successfully.
Jan 17 12:11:56.884055 systemd-logind[1690]: Session 17 logged out. Waiting for processes to exit.
Jan 17 12:11:56.885326 systemd-logind[1690]: Removed session 17.
Jan 17 12:12:01.961469 systemd[1]: Started sshd@15-10.200.20.40:22-10.200.16.10:45622.service - OpenSSH per-connection server daemon (10.200.16.10:45622).
Jan 17 12:12:02.391946 sshd[6352]: Accepted publickey for core from 10.200.16.10 port 45622 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4
Jan 17 12:12:02.393601 sshd[6352]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:12:02.398723 systemd-logind[1690]: New session 18 of user core.
Jan 17 12:12:02.405314 systemd[1]: Started session-18.scope - Session 18 of User core.
Jan 17 12:12:02.787875 sshd[6352]: pam_unix(sshd:session): session closed for user core
Jan 17 12:12:02.792724 systemd[1]: sshd@15-10.200.20.40:22-10.200.16.10:45622.service: Deactivated successfully.
Jan 17 12:12:02.795221 systemd[1]: session-18.scope: Deactivated successfully.
Jan 17 12:12:02.796411 systemd-logind[1690]: Session 18 logged out. Waiting for processes to exit.
Jan 17 12:12:02.797702 systemd-logind[1690]: Removed session 18.
Jan 17 12:12:02.861921 systemd[1]: Started sshd@16-10.200.20.40:22-10.200.16.10:45632.service - OpenSSH per-connection server daemon (10.200.16.10:45632).
Jan 17 12:12:03.293849 sshd[6365]: Accepted publickey for core from 10.200.16.10 port 45632 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4
Jan 17 12:12:03.295802 sshd[6365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:12:03.301306 systemd-logind[1690]: New session 19 of user core.
Jan 17 12:12:03.308425 systemd[1]: Started session-19.scope - Session 19 of User core.
Jan 17 12:12:03.784776 sshd[6365]: pam_unix(sshd:session): session closed for user core
Jan 17 12:12:03.787374 systemd[1]: sshd@16-10.200.20.40:22-10.200.16.10:45632.service: Deactivated successfully.
Jan 17 12:12:03.789457 systemd[1]: session-19.scope: Deactivated successfully.
Jan 17 12:12:03.791166 systemd-logind[1690]: Session 19 logged out. Waiting for processes to exit.
Jan 17 12:12:03.792764 systemd-logind[1690]: Removed session 19.
Jan 17 12:12:03.858920 systemd[1]: Started sshd@17-10.200.20.40:22-10.200.16.10:45648.service - OpenSSH per-connection server daemon (10.200.16.10:45648).
Jan 17 12:12:04.274889 sshd[6397]: Accepted publickey for core from 10.200.16.10 port 45648 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4
Jan 17 12:12:04.276602 sshd[6397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:12:04.281893 systemd-logind[1690]: New session 20 of user core.
Jan 17 12:12:04.285454 systemd[1]: Started session-20.scope - Session 20 of User core.
Jan 17 12:12:06.346404 sshd[6397]: pam_unix(sshd:session): session closed for user core
Jan 17 12:12:06.349168 systemd[1]: sshd@17-10.200.20.40:22-10.200.16.10:45648.service: Deactivated successfully.
Jan 17 12:12:06.351652 systemd[1]: session-20.scope: Deactivated successfully.
Jan 17 12:12:06.353932 systemd-logind[1690]: Session 20 logged out. Waiting for processes to exit.
Jan 17 12:12:06.355206 systemd-logind[1690]: Removed session 20.
Jan 17 12:12:06.424319 systemd[1]: Started sshd@18-10.200.20.40:22-10.200.16.10:50368.service - OpenSSH per-connection server daemon (10.200.16.10:50368).
Jan 17 12:12:06.851958 sshd[6438]: Accepted publickey for core from 10.200.16.10 port 50368 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4
Jan 17 12:12:06.853834 sshd[6438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:12:06.857892 systemd-logind[1690]: New session 21 of user core.
Jan 17 12:12:06.862322 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 17 12:12:07.371065 sshd[6438]: pam_unix(sshd:session): session closed for user core
Jan 17 12:12:07.374910 systemd-logind[1690]: Session 21 logged out. Waiting for processes to exit.
Jan 17 12:12:07.375529 systemd[1]: sshd@18-10.200.20.40:22-10.200.16.10:50368.service: Deactivated successfully.
Jan 17 12:12:07.378878 systemd[1]: session-21.scope: Deactivated successfully.
Jan 17 12:12:07.380051 systemd-logind[1690]: Removed session 21.
Jan 17 12:12:07.456444 systemd[1]: Started sshd@19-10.200.20.40:22-10.200.16.10:50374.service - OpenSSH per-connection server daemon (10.200.16.10:50374).
Jan 17 12:12:07.879542 sshd[6449]: Accepted publickey for core from 10.200.16.10 port 50374 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4
Jan 17 12:12:07.881352 sshd[6449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:12:07.887160 systemd-logind[1690]: New session 22 of user core.
Jan 17 12:12:07.890293 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 17 12:12:08.275402 sshd[6449]: pam_unix(sshd:session): session closed for user core
Jan 17 12:12:08.279256 systemd-logind[1690]: Session 22 logged out. Waiting for processes to exit.
Jan 17 12:12:08.279862 systemd[1]: sshd@19-10.200.20.40:22-10.200.16.10:50374.service: Deactivated successfully.
Jan 17 12:12:08.282263 systemd[1]: session-22.scope: Deactivated successfully.
Jan 17 12:12:08.283399 systemd-logind[1690]: Removed session 22.
Jan 17 12:12:13.356684 systemd[1]: Started sshd@20-10.200.20.40:22-10.200.16.10:50376.service - OpenSSH per-connection server daemon (10.200.16.10:50376).
Jan 17 12:12:13.758656 sshd[6464]: Accepted publickey for core from 10.200.16.10 port 50376 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4
Jan 17 12:12:13.759991 sshd[6464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:12:13.765164 systemd-logind[1690]: New session 23 of user core.
Jan 17 12:12:13.774286 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 17 12:12:14.132548 sshd[6464]: pam_unix(sshd:session): session closed for user core
Jan 17 12:12:14.136026 systemd[1]: sshd@20-10.200.20.40:22-10.200.16.10:50376.service: Deactivated successfully.
Jan 17 12:12:14.138144 systemd[1]: session-23.scope: Deactivated successfully.
Jan 17 12:12:14.138914 systemd-logind[1690]: Session 23 logged out. Waiting for processes to exit.
Jan 17 12:12:14.141591 systemd-logind[1690]: Removed session 23.
Jan 17 12:12:19.228411 systemd[1]: Started sshd@21-10.200.20.40:22-10.200.16.10:44076.service - OpenSSH per-connection server daemon (10.200.16.10:44076).
Jan 17 12:12:19.676455 sshd[6479]: Accepted publickey for core from 10.200.16.10 port 44076 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4
Jan 17 12:12:19.677925 sshd[6479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:12:19.682830 systemd-logind[1690]: New session 24 of user core.
Jan 17 12:12:19.686323 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 17 12:12:20.088359 sshd[6479]: pam_unix(sshd:session): session closed for user core
Jan 17 12:12:20.092312 systemd-logind[1690]: Session 24 logged out. Waiting for processes to exit.
Jan 17 12:12:20.092588 systemd[1]: sshd@21-10.200.20.40:22-10.200.16.10:44076.service: Deactivated successfully.
Jan 17 12:12:20.095459 systemd[1]: session-24.scope: Deactivated successfully.
Jan 17 12:12:20.099761 systemd-logind[1690]: Removed session 24.
Jan 17 12:12:25.166458 systemd[1]: Started sshd@22-10.200.20.40:22-10.200.16.10:44088.service - OpenSSH per-connection server daemon (10.200.16.10:44088).
Jan 17 12:12:25.574117 sshd[6492]: Accepted publickey for core from 10.200.16.10 port 44088 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4
Jan 17 12:12:25.575462 sshd[6492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:12:25.579253 systemd-logind[1690]: New session 25 of user core.
Jan 17 12:12:25.587247 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 17 12:12:25.941781 sshd[6492]: pam_unix(sshd:session): session closed for user core
Jan 17 12:12:25.947168 systemd-logind[1690]: Session 25 logged out. Waiting for processes to exit.
Jan 17 12:12:25.947671 systemd[1]: sshd@22-10.200.20.40:22-10.200.16.10:44088.service: Deactivated successfully.
Jan 17 12:12:25.949777 systemd[1]: session-25.scope: Deactivated successfully.
Jan 17 12:12:25.950753 systemd-logind[1690]: Removed session 25.
Jan 17 12:12:31.029384 systemd[1]: Started sshd@23-10.200.20.40:22-10.200.16.10:58320.service - OpenSSH per-connection server daemon (10.200.16.10:58320).
Jan 17 12:12:31.457328 sshd[6505]: Accepted publickey for core from 10.200.16.10 port 58320 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4
Jan 17 12:12:31.458700 sshd[6505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:12:31.463321 systemd-logind[1690]: New session 26 of user core.
Jan 17 12:12:31.470300 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 17 12:12:31.835313 sshd[6505]: pam_unix(sshd:session): session closed for user core
Jan 17 12:12:31.837985 systemd-logind[1690]: Session 26 logged out. Waiting for processes to exit.
Jan 17 12:12:31.838277 systemd[1]: sshd@23-10.200.20.40:22-10.200.16.10:58320.service: Deactivated successfully.
Jan 17 12:12:31.839968 systemd[1]: session-26.scope: Deactivated successfully.
Jan 17 12:12:31.842790 systemd-logind[1690]: Removed session 26.
Jan 17 12:12:36.910418 systemd[1]: Started sshd@24-10.200.20.40:22-10.200.16.10:60750.service - OpenSSH per-connection server daemon (10.200.16.10:60750).
Jan 17 12:12:37.315451 sshd[6562]: Accepted publickey for core from 10.200.16.10 port 60750 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4
Jan 17 12:12:37.317184 sshd[6562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:12:37.321691 systemd-logind[1690]: New session 27 of user core.
Jan 17 12:12:37.330306 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 17 12:12:37.678614 sshd[6562]: pam_unix(sshd:session): session closed for user core
Jan 17 12:12:37.682445 systemd-logind[1690]: Session 27 logged out. Waiting for processes to exit.
Jan 17 12:12:37.682626 systemd[1]: sshd@24-10.200.20.40:22-10.200.16.10:60750.service: Deactivated successfully.
Jan 17 12:12:37.685380 systemd[1]: session-27.scope: Deactivated successfully.
Jan 17 12:12:37.688385 systemd-logind[1690]: Removed session 27.
Jan 17 12:12:42.766396 systemd[1]: Started sshd@25-10.200.20.40:22-10.200.16.10:60756.service - OpenSSH per-connection server daemon (10.200.16.10:60756).
Jan 17 12:12:43.219964 sshd[6575]: Accepted publickey for core from 10.200.16.10 port 60756 ssh2: RSA SHA256:G4lMbssvChlhnp7djbcd9tTo5eVcsl9af0MkzK1+MB4
Jan 17 12:12:43.221564 sshd[6575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 12:12:43.225400 systemd-logind[1690]: New session 28 of user core.
Jan 17 12:12:43.231250 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 17 12:12:43.620325 sshd[6575]: pam_unix(sshd:session): session closed for user core
Jan 17 12:12:43.623291 systemd[1]: sshd@25-10.200.20.40:22-10.200.16.10:60756.service: Deactivated successfully.
Jan 17 12:12:43.625338 systemd[1]: session-28.scope: Deactivated successfully.
Jan 17 12:12:43.626934 systemd-logind[1690]: Session 28 logged out. Waiting for processes to exit.
Jan 17 12:12:43.628043 systemd-logind[1690]: Removed session 28.